Large Vision-Language Models (LVLMs) are increasingly adept at generating contextually detailed and coherent responses from visual inputs. However, their application in multimodal decision-making and open-ended generation is hindered by a notable rate of hallucinations, where the generated text misrepresents the visual content. To address this issue, this paper introduces Instruction Contrastive Decoding (ICD), a novel method designed to reduce hallucinations during LVLM inference. Our method is motivated by the observation that what we call disturbance instructions significantly exacerbate hallucinations in the multimodal fusion module. ICD contrasts the output distributions obtained under standard and disturbance instructions, exploiting the increased alignment uncertainty induced by the disturbance to subtract hallucinated concepts from the original distribution. Through comprehensive experiments on discriminative benchmarks (POPE and MME) and a generative benchmark (LLaVA-Bench), we demonstrate that ICD substantially mitigates both object-level and attribute-level hallucinations. Moreover, our method not only reduces hallucinations but also enhances the general perception and recognition capabilities of LVLMs.
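The contrastive step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the names `logits_standard`, `logits_disturbed`, and the contrast weight `alpha` are assumed for exposition, and the `(1 + alpha)` / `alpha` weighting follows the common contrastive-decoding formulation; the paper's method section defines the exact form.

```python
import torch
import torch.nn.functional as F

def icd_step(logits_standard: torch.Tensor,
             logits_disturbed: torch.Tensor,
             alpha: float = 1.0) -> torch.Tensor:
    """One decoding step of instruction-contrastive decoding (sketch).

    logits_standard: next-token logits under the original instruction.
    logits_disturbed: next-token logits under a disturbance instruction.
    alpha: hypothetical contrast strength; alpha = 0 recovers standard decoding.
    """
    # Amplify the standard logits and subtract the disturbed ones, so tokens
    # whose probability is inflated by the disturbance (hallucination-prone
    # concepts) are down-weighted in the final distribution.
    contrastive_logits = (1 + alpha) * logits_standard - alpha * logits_disturbed
    return F.softmax(contrastive_logits, dim=-1)

# Toy usage with random logits over a 5-token vocabulary.
std = torch.randn(5)
dist = torch.randn(5)
print(icd_step(std, dist, alpha=1.0))
```

In practice the two logit vectors would come from the same LVLM run twice per step, once with the user's instruction and once with the disturbance instruction prepended; the sketch only shows how the two distributions are combined.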