Pre-trained vision and language (V\&L) models have substantially improved the performance of cross-modal image-text retrieval. In general, however, V\&L models have limited retrieval performance for small objects because of the coarse alignment between words and small objects in the image. In contrast, human cognition is known to be object-centric: we pay more attention to important objects, even if they are small. To bridge this gap between human cognition and the capability of V\&L models, we propose a cross-modal image-text retrieval framework based on ``object-aware query perturbation.'' The proposed method generates a key feature subspace of the detected objects and perturbs the corresponding queries using this subspace to improve object awareness in the image. Our method enables object-aware cross-modal image-text retrieval while preserving the rich expressive power and retrieval performance of existing V\&L models, without additional fine-tuning. Comprehensive experiments on four public datasets show that our method outperforms conventional algorithms.
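To make the perturbation step more concrete, one plausible instantiation is sketched below. This is an illustrative formulation only, not necessarily the exact one used in the paper: the object feature matrix $O$, the projection $P$, the rank $k$, and the scaling factor $\lambda$ are symbols introduced here for the example.
% Illustrative sketch of object-aware query perturbation (assumed formulation).
% O collects detector features; P projects onto their dominant subspace; q is a query feature.
\begin{align}
  O &= [\,o_1, \dots, o_m\,] \in \mathbb{R}^{d \times m}
      && \text{features of the $m$ detected objects} \\
  O &= U \Sigma V^{\top}, \quad P = U_k U_k^{\top}
      && \text{rank-$k$ key feature subspace via SVD} \\
  \tilde{q} &= q + \lambda\, P q
      && \text{query perturbed toward the object subspace}
\end{align}
Under this sketch, the perturbed query $\tilde{q}$ replaces $q$ in the retrieval model's attention or similarity computation, so small detected objects receive more weight without retraining the model.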