Visible-infrared person re-identification (VIReID) aims to retrieve pedestrian images of the same identity across visible and infrared modalities. Existing methods learn visual content solely from images and thus lack the capability to perceive high-level semantics. In this paper, we propose an Embedding and Enriching Explicit Semantics (EEES) framework to learn semantically rich cross-modality pedestrian representations. Our method makes three main contributions. First, with the collaboration of multiple large vision-language models, we develop Explicit Semantics Embedding (ESE), which automatically supplements language descriptions for pedestrians and aligns image-text pairs in a common space, thereby learning visual content associated with explicit semantics. Second, recognizing the complementarity of multi-view information, we present Cross-View Semantics Compensation (CVSC), which constructs multi-view image-text pair representations, establishes many-to-many matching among them, and propagates knowledge to single-view representations, thereby compensating visual content with the cross-view semantics it lacks. Third, to eliminate noisy semantics such as conflicting color attributes across modalities, we design Cross-Modality Semantics Purification (CMSP), which constrains the distance between inter-modality image-text pair representations to be close to that between intra-modality image-text pair representations, further enhancing the modality invariance of visual content. Finally, experimental results demonstrate the effectiveness and superiority of the proposed EEES.
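Two of the constraints above can be illustrated concretely. The following is a minimal NumPy sketch, not the paper's implementation: `image_text_alignment_loss` is a standard symmetric InfoNCE objective of the kind ESE-style image-text alignment typically uses, and `modality_purification_loss` illustrates the CMSP idea by penalizing the gap between an inter-modality and an intra-modality pair distance (the squared-difference form and all function names here are assumptions for illustration).

```python
import numpy as np

def l2norm(x):
    """L2-normalize feature vectors along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def image_text_alignment_loss(img_feat, txt_feat, tau=0.07):
    """Symmetric InfoNCE over a batch: row i of img_feat and row i of
    txt_feat form a matched image-text pair (an ESE-style alignment sketch)."""
    img, txt = l2norm(img_feat), l2norm(txt_feat)
    logits = img @ txt.T / tau          # cosine similarities scaled by temperature
    idx = np.arange(len(img))

    def xent(lg):
        # cross-entropy with the diagonal (matched pairs) as targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

def modality_purification_loss(inter_dist, intra_dist):
    """CMSP-style sketch: pull the inter-modality image-text pair distance
    toward the intra-modality one (squared difference is an assumption)."""
    return float((inter_dist - intra_dist) ** 2)
```

As a sanity check, the alignment loss should be lower when text features track their matched image features than when the pairing is scrambled, and the purification loss vanishes exactly when the two distances agree.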