Large vision-language models (LVLMs) have made substantial progress in integrating large language models (LLMs) with visual inputs, enabling advanced multimodal reasoning. Despite their success, a persistent challenge is hallucination, where generated text fails to accurately reflect visual content, undermining both accuracy and reliability. Existing methods focus on alignment training or decoding refinements, but these primarily address symptoms at the generation stage without probing the underlying causes. In this work, we investigate the internal mechanisms that drive hallucination in LVLMs, with an emphasis on the multi-head attention module. Specifically, we introduce Vision-aware Head Divergence (VHD), a metric that quantifies the sensitivity of attention head outputs to visual context. Our analysis reveals the presence of vision-aware attention heads that are more attuned to visual information; however, the model's overreliance on its prior language patterns is closely linked to hallucinations. Building on these insights, we propose Vision-aware Head Reinforcement (VHR), a training-free approach that mitigates hallucination by enhancing the role of vision-aware attention heads. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches in mitigating hallucinations while remaining highly efficient, with negligible additional time overhead.
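To make the two ideas concrete, the sketch below illustrates one plausible reading of them: a VHD-style score computed as the per-head output divergence between a forward pass with image tokens and one without, and a VHR-style step that amplifies the highest-scoring heads at inference time. The abstract does not give the exact formulation, so the function names, tensor shapes, and the `top_k`/`scale` parameters are illustrative assumptions, not the paper's definitions.

```python
import torch


def vision_aware_head_divergence(head_out_with_image: torch.Tensor,
                                 head_out_text_only: torch.Tensor) -> torch.Tensor:
    """Illustrative VHD-style score: how much each attention head's output
    shifts when visual context is present versus absent.

    Both inputs are assumed to have shape (num_layers, num_heads, hidden_dim),
    e.g. per-head outputs at the last token position from two forward passes
    of the same LVLM, one with image tokens and one with them removed.
    """
    # L2 distance between the two conditions, per (layer, head) pair.
    return (head_out_with_image - head_out_text_only).norm(dim=-1)


def reinforce_vision_aware_heads(head_outputs: torch.Tensor,
                                 vhd_scores: torch.Tensor,
                                 top_k: int = 8,
                                 scale: float = 1.5) -> torch.Tensor:
    """Illustrative VHR-style step: amplify the heads with the highest
    divergence scores before their outputs are merged downstream.
    `top_k` and `scale` are hypothetical knobs, not values from the paper.
    """
    flat = vhd_scores.flatten()
    top = flat.topk(top_k).indices
    weights = torch.ones_like(flat)
    weights[top] = scale  # strengthen vision-aware heads, leave the rest unchanged
    return head_outputs * weights.view(*vhd_scores.shape, 1)
```

Because the reweighting is applied only at inference, a scheme like this needs no gradient updates, which is consistent with the training-free and low-overhead claims in the abstract.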