Multimodal retrieval systems typically employ Vision-Language Models (VLMs) that encode images and text independently into vectors in a shared embedding space. Despite incorporating text encoders, VLMs consistently underperform specialized text models on text-only retrieval tasks. Moreover, introducing additional text encoders increases storage and inference overhead and exacerbates retrieval inefficiency, especially in multilingual settings. To address these limitations, we propose a multi-task learning framework that unifies feature representations across images, long and short texts, and intent-rich queries. To our knowledge, this is the first work to jointly optimize multilingual image retrieval, text retrieval, and natural language understanding (NLU) within a single framework. Our approach integrates image and text retrieval through a shared text encoder enhanced with NLU features, improving both intent understanding and retrieval accuracy.
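The abstract does not specify the training objective, so the following is only a minimal sketch of one plausible instantiation of such a multi-task setup: a single shared text encoder serving image retrieval, text retrieval, and an NLU intent head, trained with a CLIP-style in-batch contrastive loss. All module names (`SharedMultiTaskModel`, `intent_head`), the loss weights, and the use of InfoNCE are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: one shared text encoder driving three tasks
# (image retrieval, text retrieval, NLU intent classification).
# The contrastive objective and all names here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMultiTaskModel(nn.Module):
    def __init__(self, text_encoder: nn.Module, image_encoder: nn.Module,
                 dim: int = 512, num_intents: int = 32):
        super().__init__()
        self.text_encoder = text_encoder        # shared across all three tasks
        self.image_encoder = image_encoder
        self.intent_head = nn.Linear(dim, num_intents)   # NLU task head
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ln(1/0.07)

    def encode_text(self, text_feats: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.text_encoder(text_feats), dim=-1)

    def encode_image(self, image_feats: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.image_encoder(image_feats), dim=-1)

def info_nce(a: torch.Tensor, b: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Symmetric contrastive loss over in-batch negatives (CLIP-style).
    logits = scale.exp() * a @ b.t()
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def multitask_loss(model, queries, images, docs, intent_labels,
                   w_img=1.0, w_txt=1.0, w_nlu=0.5):
    q = model.encode_text(queries)      # intent-rich queries
    i = model.encode_image(images)      # paired images
    d = model.encode_text(docs)         # paired long/short documents
    loss_img = info_nce(q, i, model.logit_scale)   # image retrieval task
    loss_txt = info_nce(q, d, model.logit_scale)   # text retrieval task
    loss_nlu = F.cross_entropy(model.intent_head(model.text_encoder(queries)),
                               intent_labels)      # intent understanding task
    return w_img * loss_img + w_txt * loss_txt + w_nlu * loss_nlu

# Toy usage with linear layers standing in for real backbones:
model = SharedMultiTaskModel(nn.Linear(768, 512), nn.Linear(1024, 512))
loss = multitask_loss(model,
                      queries=torch.randn(8, 768),
                      images=torch.randn(8, 1024),
                      docs=torch.randn(8, 768),
                      intent_labels=torch.randint(0, 32, (8,)))
```

The key design point this sketch illustrates is that both retrieval losses and the NLU loss backpropagate through the same text encoder, so intent supervision can shape the retrieval embedding space without adding a second encoder at inference time.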