Though advanced in jointly understanding visual information and human language, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs' subsequent generation. Thus, we raise a question: when presented with a query relevant to a previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground-truth visual information is available? To answer this, we propose a framework called MMHalSnowball to evaluate LVLMs' behavior when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiments show that the performance of open-source LVLMs drops by at least $31\%$, indicating that LVLMs are prone to accept the generated hallucinations and make false claims that they would not have supported without the distraction. We term this phenomenon Multimodal Hallucination Snowballing. To mitigate it, we further propose a training-free method called Residual Visual Decoding, which revises the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information. Experiments show that our method mitigates more than $24\%$ of the snowballed multimodal hallucination while maintaining the models' general capabilities.
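To make the decoding-time correction concrete, the sketch below shows one plausible way such a residual-visual revision could be implemented at each generation step. It is a minimal illustration, not the paper's exact formulation: the function name `residual_visual_decoding`, the contrastive-style combination rule, and the strength parameter `alpha` are assumptions introduced here for exposition.

```python
import torch
import torch.nn.functional as F

def residual_visual_decoding(logits_context: torch.Tensor,
                             logits_residual: torch.Tensor,
                             alpha: float = 1.0) -> torch.Tensor:
    """Illustrative sketch of revising the next-token distribution.

    logits_context:  [vocab] logits conditioned on the image plus the full
                     (possibly hallucinatory) conversation history.
    logits_residual: [vocab] logits conditioned on the residual visual input,
                     i.e., the image and the current question alone.
    alpha:           assumed knob controlling how strongly the residual
                     visual evidence corrects the context-conditioned output.
    """
    log_p_context = F.log_softmax(logits_context, dim=-1)
    log_p_residual = F.log_softmax(logits_residual, dim=-1)
    # Amplify the residual visual distribution relative to the one derived
    # from the full context, in the spirit of contrastive decoding.
    revised = (1.0 + alpha) * log_p_residual - alpha * log_p_context
    return F.softmax(revised, dim=-1)
```

In this reading, the residual visual branch gives the model direct access to the image without the snowballed conversational context, and the combined distribution is used to sample or greedily pick the next token; the precise weighting scheme in the paper may differ from this assumed form.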