Pre-trained vision and language (V\&L) models have substantially improved the performance of cross-modal image-text retrieval. In general, however, V\&L models have limited retrieval performance for small objects because of the coarse alignment between words and small objects in an image. In contrast, human cognition is known to be object-centric: we pay more attention to important objects, even if they are small. To bridge this gap between human cognition and the capability of V\&L models, we propose a cross-modal image-text retrieval framework based on ``object-aware query perturbation.'' The proposed method generates a key feature subspace of the detected objects and perturbs the corresponding queries using this subspace to improve object awareness in the image. Our method enables object-aware cross-modal image-text retrieval while preserving the rich expressive power and retrieval performance of existing V\&L models, without additional fine-tuning. Comprehensive experiments on four public datasets show that our method outperforms conventional algorithms. Our code is publicly available at \url{https://github.com/NEC-N-SOGI/query-perturbation}.
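To make the abstract's mechanism concrete, the sketch below illustrates one plausible reading of ``object-aware query perturbation'': build a low-rank subspace from detected-object features (here via SVD) and additively boost the query components that lie in that subspace. This is a minimal sketch, not the authors' implementation; all names (`object_subspace`, `perturb_queries`, `rank`, `alpha`) are hypothetical, and the paper's actual subspace construction and perturbation rule may differ.

\begin{verbatim}
# Hypothetical sketch of object-aware query perturbation.
# Assumes: object features from a detector, queries from a frozen
# V&L model's cross-attention; SVD subspace and additive boost are
# illustrative choices, not the paper's confirmed formulation.
import torch

def object_subspace(obj_feats: torch.Tensor, rank: int = 4) -> torch.Tensor:
    """Build a key-feature subspace from detected-object features.

    obj_feats: (num_objects, dim) features of detected (possibly small)
    objects. Returns an orthonormal basis (dim, rank) spanning their
    principal directions.
    """
    # Center the features, then take the top-`rank` right singular vectors.
    centered = obj_feats - obj_feats.mean(dim=0, keepdim=True)
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return vh[:rank].T  # (dim, rank), orthonormal columns

def perturb_queries(queries: torch.Tensor, basis: torch.Tensor,
                    alpha: float = 1.0) -> torch.Tensor:
    """Amplify the query components lying in the object subspace.

    queries: (num_queries, dim) attention queries (model kept frozen).
    basis:   (dim, rank) orthonormal basis from `object_subspace`.
    """
    projection = (queries @ basis) @ basis.T  # project onto the subspace
    return queries + alpha * projection       # boost object-related directions

# Toy usage: 5 detected objects and 10 queries in a 256-d feature space.
objs = torch.randn(5, 256)
q = torch.randn(10, 256)
q_perturbed = perturb_queries(q, object_subspace(objs), alpha=0.5)
\end{verbatim}

Because the perturbation only re-weights existing query directions and the V\&L model's weights are untouched, this style of intervention is consistent with the abstract's claim of requiring no additional fine-tuning.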