Text-Based Person Search (TBPS) is a crucial task in the Internet of Things (IoT) domain that enables accurate retrieval of target individuals from large-scale galleries given only a textual caption. For cross-modal TBPS tasks, it is critical to obtain well-distributed representations in the common embedding space to reduce the inter-modal gap. Furthermore, learning detailed image-text correspondences is essential for discriminating similar targets and enabling fine-grained search. To address these challenges, we present a simple yet effective method named Sew Calibration and Masked Modeling (SCMM), which calibrates cross-modal representations by learning compact and well-aligned embeddings. SCMM introduces two novel losses for fine-grained cross-modal representation: a Sew calibration loss, which aligns image and text features according to the quality of textual captions, and a Masked Caption Modeling (MCM) loss, which establishes detailed relationships between textual and visual parts. This dual-pronged strategy enhances feature alignment and cross-modal correspondence, enabling accurate distinction of similar individuals while retaining a streamlined dual-encoder architecture for real-time inference, which is essential for resource-limited sensors and IoT systems. Extensive experiments on three popular TBPS benchmarks demonstrate the superiority of SCMM, which achieves 73.81%, 64.25%, and 57.35% Rank-1 accuracy on CUHK-PEDES, ICFG-PEDES, and RSTPReID, respectively.
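The sketch below illustrates, in a minimal PyTorch form, how a masked-caption-modeling objective can be attached to a dual-encoder setup of the kind summarized above. It is an illustrative assumption only: the class `DualEncoderMCM`, the encoder choices, layer sizes, masking ratio, and token IDs are placeholders and do not reproduce the paper's actual SCMM implementation or its Sew calibration loss.

```python
# Minimal sketch (assumption, not the paper's implementation): a dual encoder
# with a masked caption modeling (MCM) head that predicts masked caption tokens
# from the joint text + image context during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderMCM(nn.Module):
    def __init__(self, vocab_size=30522, dim=256, mask_token_id=103):
        super().__init__()
        self.mask_token_id = mask_token_id
        # Stand-ins for the image and text encoders (a ViT / BERT pair in practice).
        self.image_encoder = nn.Sequential(
            nn.Linear(2048, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)
        # Head that predicts the identity of masked caption tokens.
        self.mcm_head = nn.Linear(dim, vocab_size)

    def forward(self, image_feats, caption_ids, mask_prob=0.15):
        # Global image embedding used as a visual context token.
        img = self.image_encoder(image_feats)                    # (B, dim)
        # Randomly mask caption tokens; these positions are the MCM targets.
        mask = torch.rand_like(caption_ids.float()) < mask_prob  # (B, L) bool
        masked_ids = caption_ids.masked_fill(mask, self.mask_token_id)
        txt = self.token_embed(masked_ids)                       # (B, L, dim)
        # Prepend the image embedding so masked tokens can attend to visual cues.
        fused = torch.cat([img.unsqueeze(1), txt], dim=1)        # (B, L+1, dim)
        out = self.text_encoder(fused)[:, 1:]                    # drop image slot
        logits = self.mcm_head(out)                              # (B, L, vocab)
        # Cross-entropy only on masked positions (in practice, guard against
        # batches where no token happened to be masked).
        return F.cross_entropy(logits[mask], caption_ids[mask])

if __name__ == "__main__":
    model = DualEncoderMCM()
    imgs = torch.randn(8, 2048)                  # e.g. pooled visual features
    caps = torch.randint(0, 30522, (8, 32))      # tokenized captions
    print(model(imgs, caps))                     # MCM training loss
```

In a dual-encoder regime like the one described in the abstract, a head of this kind would only supply training-time supervision; retrieval still uses the two encoders separately, so real-time inference is unaffected.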