The retrieval-based multi-image question answering (QA) task involves retrieving multiple question-relevant images and synthesizing them to generate an answer. Conventional "retrieve-then-answer" pipelines often suffer from cascading errors because the QA training objective does not optimize the retrieval stage. To address this issue, we propose a novel method that effectively introduces retrieved information into QA and allows the answer generator to reference it. Given the image set to be retrieved, we employ a multimodal large language model (visual perspective) and a large language model (textual perspective) to obtain a multimodal hypothetical summary (MHyS) in both question form and description form. By combining the visual and textual perspectives, MHyS captures image content more specifically and replaces the real images during retrieval, which eliminates the modality gap by transforming the task into text-to-text retrieval and thereby improves retrieval. To couple retrieval with QA more effectively, we employ contrastive learning to align queries (questions) with MHyS. Moreover, we propose a coarse-to-fine strategy that computes both sentence-level and word-level similarity scores to further enhance retrieval and filter out irrelevant details. Our approach achieves a 3.7% absolute improvement over state-of-the-art methods on RETVQA and a 14.5% improvement over CLIP. Comprehensive experiments and detailed ablation studies demonstrate the superiority of our method.
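The coarse-to-fine scoring idea — combining a sentence-level (coarse) similarity with a word-level (fine) similarity between a query and an MHyS text — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the mixing weight `alpha`, the ColBERT-style max-over-tokens fine score, and the embedding shapes are all assumptions made for clarity.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # L2-normalize embeddings so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def coarse_to_fine_score(query_sent, summary_sent,
                         query_tokens, summary_tokens, alpha=0.5):
    """Combine sentence-level (coarse) and word-level (fine) similarity.

    query_sent, summary_sent: (d,) sentence embeddings
    query_tokens:   (n, d) token embeddings of the query
    summary_tokens: (m, d) token embeddings of the hypothetical summary
    alpha: hypothetical mixing weight (not specified in the abstract)
    """
    # Coarse: cosine similarity between the two sentence embeddings
    s_coarse = float(l2_normalize(query_sent) @ l2_normalize(summary_sent))

    # Fine: for each query token, take the max cosine similarity over
    # summary tokens, then average (a late-interaction-style score that
    # down-weights summary words irrelevant to the question)
    sim = l2_normalize(query_tokens) @ l2_normalize(summary_tokens).T  # (n, m)
    s_fine = float(sim.max(axis=1).mean())

    return alpha * s_coarse + (1 - alpha) * s_fine
```

In a text-to-text retrieval setting, this score would be computed between the question and each candidate image's MHyS, and the top-scoring candidates retrieved for answer generation.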