Text-to-image retrieval (T2I retrieval) remains challenging because cross-modal embeddings often behave as bags of concepts, underrepresenting structured visual relationships such as pose and viewpoint. We propose Visualize-then-Retrieve (VisRet), a retrieval paradigm that mitigates this limitation of cross-modal similarity alignment. VisRet first projects textual queries into the image modality via T2I generation, then performs retrieval within the image modality to bypass the weaknesses of cross-modal retrievers in recognizing subtle visual-spatial features. Across four benchmarks (Visual-RAG, INQUIRE-Rerank, Microsoft COCO, and our new Visual-RAG-ME featuring multi-entity comparisons), VisRet substantially outperforms cross-modal similarity matching and baselines that recast T2I retrieval as text-to-text similarity matching, improving nDCG@30 by 0.125 on average with CLIP as the retriever and by 0.121 with E5-V. For downstream question answering, VisRet increases accuracy on Visual-RAG and Visual-RAG-ME by 3.8% and 15.7% with top-1 retrieval, and by 3.9% and 11.1% with top-10 retrieval. Ablation studies show compatibility with different T2I instruction LLMs, T2I generation models, and downstream LLMs. VisRet offers a simple yet effective perspective for advancing text-to-image retrieval. Our code and the new benchmark are publicly available at https://github.com/xiaowu0162/Visualize-then-Retrieve.
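For concreteness, the snippet below is a minimal sketch of the two-step visualize-then-retrieve idea: generate an image from the textual query with an off-the-shelf T2I model, then rank corpus images by image-to-image similarity. The specific models (Stable Diffusion v1.5, CLIP ViT-B/32), the example query, and the corpus path are illustrative assumptions, not the paper's exact configuration; the authors' actual implementation is in the repository linked above.

```python
# Minimal VisRet-style sketch (illustrative assumptions, not the paper's exact setup).
import glob
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: visualize -- project the textual query into the image modality
# with a T2I generator (any T2I model could be swapped in here).
t2i = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
query_text = "a red-tailed hawk seen from below with wings fully spread"  # hypothetical query
query_image = t2i(query_text).images[0]

# Step 2: retrieve -- rank corpus images by image-to-image similarity,
# bypassing cross-modal (text-to-image) matching.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(images):
    """Return L2-normalized CLIP image embeddings for a list of PIL images."""
    inputs = proc(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

# Hypothetical corpus directory of candidate images.
corpus_images = [Image.open(p).convert("RGB") for p in glob.glob("corpus/*.jpg")]

query_emb = embed_images([query_image])           # shape (1, d)
corpus_embs = embed_images(corpus_images)         # shape (N, d)
scores = (query_emb @ corpus_embs.T).squeeze(0)   # cosine similarity per corpus image
top_k = scores.topk(k=min(10, len(corpus_images))).indices.tolist()
print("Top retrieved corpus indices:", top_k)
```

In this sketch, the cross-modal retriever is used only as an image encoder, so ranking relies on image-to-image similarity; swapping in a different T2I generator or image encoder (e.g., E5-V) requires changing only the corresponding model-loading lines.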