The rapid development of Large Multimodal Models (LMMs) has significantly advanced multimodal understanding by harnessing the language abilities of Large Language Models (LLMs) and integrating modality-specific encoders. However, LMMs are plagued by hallucinations that limit their reliability and adoption. While traditional methods to detect and mitigate these hallucinations often involve costly training or rely heavily on external models, recent approaches utilizing internal model features present a promising alternative. In this paper, we critically assess the limitations of the state-of-the-art training-free technique, the logit lens, in handling generalized visual hallucinations. We introduce a refined method that leverages contextual token embeddings from middle layers of LMMs. This approach significantly improves hallucination detection and grounding across diverse categories, including actions and OCR, while also excelling in tasks requiring contextual understanding, such as spatial relations and attribute comparison. Our novel grounding technique yields highly precise bounding boxes, facilitating a transition from Zero-Shot Object Segmentation to Grounded Visual Question Answering. Our contributions pave the way for more reliable and interpretable multimodal models.
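To make the contrast between the two probes concrete, the following is a minimal, illustrative sketch, assuming a LLaVA-style LMM with a 24x24 grid of visual tokens; all dimensions, tensor names, and helper functions here are hypothetical assumptions for exposition, not our released implementation. The logit lens decodes a middle-layer activation directly into vocabulary logits, whereas the contextual-embedding probe scores each middle-layer visual token against a contextual embedding of a candidate concept, simultaneously detecting whether the concept is visually supported and grounding it to image patches.

```python
import torch
import torch.nn.functional as F

# Toy model dimensions (illustrative assumptions, not an actual LMM).
d_model, vocab_size = 64, 1000
final_norm = torch.nn.LayerNorm(d_model)    # the model's final normalization
unembed = torch.randn(vocab_size, d_model)  # output (unembedding) matrix W_U

def logit_lens(hidden: torch.Tensor) -> torch.Tensor:
    """Logit lens: project a middle-layer residual-stream activation
    straight through the final norm and unembedding into vocab space."""
    return final_norm(hidden) @ unembed.T   # (..., vocab_size)

def contextual_similarity(visual_tokens: torch.Tensor,
                          concept_embedding: torch.Tensor) -> torch.Tensor:
    """Contextual-embedding probe: instead of decoding to vocabulary
    logits, compare each middle-layer visual-token embedding to a
    contextual embedding of a candidate concept (e.g., an object named
    in the generated caption) via cosine similarity."""
    return F.cosine_similarity(visual_tokens, concept_embedding, dim=-1)

# Toy usage: 576 visual tokens (a 24x24 patch grid, as in LLaVA-style LMMs).
visual_tokens = torch.randn(576, d_model)   # middle-layer activations
concept = torch.randn(d_model)              # contextual concept embedding
patch_scores = contextual_similarity(visual_tokens, concept).reshape(24, 24)
# Thresholding `patch_scores` and taking the bounding box of the surviving
# patches yields the kind of grounding map described in the abstract.
```

Because each visual token corresponds to a fixed image patch, a per-token similarity map converts directly into a spatial heatmap, which is what makes precise bounding boxes recoverable without any additional training.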