Text-based Person Search (TBPS) aims to retrieve person images using natural language descriptions. Recently, Contrastive Language-Image Pre-training (CLIP), a universal large-scale cross-modal vision-language pre-training model, has performed remarkably on various cross-modal downstream tasks thanks to its powerful cross-modal semantic learning capacity. TBPS, as a fine-grained cross-modal retrieval task, has likewise seen a surge of CLIP-based research. To explore the potential of vision-language pre-training models for the downstream TBPS task, this paper makes the first attempt to conduct a comprehensive empirical study of CLIP for TBPS, thereby contributing a straightforward, incremental, yet strong TBPS-CLIP baseline to the TBPS community. We revisit critical design considerations under CLIP, including data augmentation and the loss function. With these designs and practical training tricks, the model attains satisfactory performance without any sophisticated modules. We also conduct probing experiments on TBPS-CLIP in terms of model generalization and model compression, demonstrating its effectiveness from various aspects. This work is expected to provide empirical insights and inspire future CLIP-based TBPS research.
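For readers unfamiliar with the objective the abstract refers to, the sketch below shows the symmetric image-text contrastive (InfoNCE) loss that CLIP is trained with and that a CLIP-based TBPS baseline would plausibly start from. This is a minimal illustrative sketch, not the paper's actual loss design; the function name and the fixed temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_feats, text_feats: (B, D) tensors of L2-normalized embeddings
    from matched image-text pairs; in-batch non-matches act as negatives.
    Temperature 0.07 is an illustrative choice, not the paper's setting.
    """
    # (B, B) cosine-similarity matrix scaled by temperature
    logits = image_feats @ text_feats.t() / temperature
    # Matched pairs lie on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text-to-image direction
    return (loss_i2t + loss_t2i) / 2
```

In this formulation, each image is classified against all captions in the batch (and vice versa), which is why design choices such as data augmentation and batch composition, revisited in this work, directly shape the effective set of positives and negatives.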