Clothing-change person re-identification (CC Re-ID) has attracted increasing attention in recent years due to its broad application prospects. Most existing works struggle to adequately extract identity-related information from raw RGB images. In this paper, we propose an Identity-aware Feature Decoupling (IFD) learning framework to mine identity-related features. In particular, IFD exploits a dual-stream architecture that consists of a main stream and an attention stream. The attention stream takes clothing-masked images as inputs and derives identity attention weights, which effectively transfer spatial knowledge to the main stream and highlight regions rich in identity-related information. To eliminate the semantic gap between the inputs of the two streams, we propose a clothing bias diminishing module specific to the main stream that regularizes the features of clothing-relevant regions. Extensive experimental results demonstrate that our framework outperforms other baseline models on several widely used CC Re-ID datasets.
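The attention-transfer idea described above can be illustrated with a minimal sketch: features from the clothing-masked input are collapsed into a spatial attention map, which then reweights the main-stream features. All shapes, function names, and the channel-mean-plus-sigmoid weighting are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def identity_attention(masked_feat):
    # Collapse channels of the attention-stream features (from the
    # clothing-masked image) into one spatial map, squashed to (0, 1).
    # NOTE: the reduction used here is an assumption for illustration.
    return sigmoid(masked_feat.mean(axis=0, keepdims=True))  # (1, H, W)

def transfer_attention(main_feat, masked_feat):
    # Reweight main-stream features so that regions rich in
    # identity-related information are emphasized.
    return main_feat * identity_attention(masked_feat)

rng = np.random.default_rng(0)
main_feat = rng.standard_normal((64, 8, 4))    # (C, H, W) main-stream features
masked_feat = rng.standard_normal((64, 8, 4))  # attention-stream features
out = transfer_attention(main_feat, masked_feat)
print(out.shape)  # (64, 8, 4)
```

The sketch only shows the spatial reweighting; the paper's clothing bias diminishing module and the training losses are not modeled here.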