Large Vision Language Models (LVLMs) have achieved significant progress in integrating visual and textual inputs for multimodal reasoning. However, a recurring challenge is ensuring these models utilize visual information as effectively as linguistic content when both modalities are necessary to formulate an accurate answer. We hypothesize that hallucinations arise from the lack of effective visual grounding in current LVLMs. This issue extends to vision-language benchmarks, where it is difficult to make the image indispensable for accurate answer generation, particularly in visual question-answering tasks. In this work, we introduce FiVL, a novel method for constructing datasets designed both to train LVLMs for enhanced visual grounding and to evaluate how well they achieve it. These datasets can be used to train and to assess an LVLM's ability to use image content as substantive evidence rather than relying solely on linguistic priors, providing insight into the model's reliance on visual information. To demonstrate the utility of our datasets, we introduce a novel training task that outperforms baselines, along with a validation method and an application to explainability. The code is available at https://github.com/IntelLabs/fivl.