Cloth-changing person re-identification (CC-ReID) poses a significant challenge in computer vision. A prevailing approach is to guide models to concentrate on causal attributes, such as facial features and hairstyles, rather than confounding elements such as clothing appearance. Traditional methods to achieve this involve integrating multi-modality data or employing manually annotated clothing labels, which tend to complicate the model and require extensive human effort. In our study, we demonstrate that simply reducing feature correlations during training can significantly enhance the baseline model's performance. We theoretically elucidate this effect and introduce a novel regularization technique based on density ratio estimation, which minimizes feature correlation during the training of cloth-changing ReID baselines. Our approach is model-independent, offering broad enhancements without needing additional data or labels. We validate our method through comprehensive experiments on prevalent CC-ReID datasets, showing its effectiveness in improving baseline models' generalization capabilities.
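To make the core idea of feature decorrelation concrete, the sketch below shows a simple batch-level penalty on off-diagonal feature correlations. This is an illustrative stand-in only: the abstract's actual regularizer is based on density ratio estimation, whose details are not given here, so the covariance-style penalty and the function name `decorrelation_penalty` are assumptions for exposition.

```python
import numpy as np

def decorrelation_penalty(features):
    """Mean squared off-diagonal correlation of a batch of feature vectors.

    features: (batch, dim) array. The penalty approaches 0 when feature
    dimensions are uncorrelated across the batch, and grows when
    dimensions carry redundant (correlated) information.
    NOTE: this is a generic decorrelation surrogate, not the paper's
    density-ratio-based regularizer.
    """
    # Standardize each feature dimension over the batch.
    centered = features - features.mean(axis=0, keepdims=True)
    z = centered / (centered.std(axis=0, keepdims=True) + 1e-8)
    # Empirical correlation matrix of the standardized features.
    corr = (z.T @ z) / features.shape[0]
    # Penalize only the off-diagonal entries (cross-feature correlations).
    off_diag = corr - np.diag(np.diag(corr))
    return float((off_diag ** 2).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))            # roughly independent features
redundant = np.concatenate([x, x], axis=1)  # each feature duplicated
print(decorrelation_penalty(x) < decorrelation_penalty(redundant))
```

In a training loop, such a term would be added to the usual identity-classification loss with a small weight, nudging the backbone toward features whose dimensions are statistically less entangled (e.g., so that clothing-related and identity-related dimensions are not redundantly mixed).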