While large vision-language models (LVLMs) have shown impressive capabilities in generating plausible responses grounded in input visual content, they still suffer from hallucinations, where the generated text inaccurately reflects the visual content. To address this, recent approaches apply contrastive decoding, which calibrates the model's response by contrasting the output distributions produced from original and visually distorted samples, mitigating hallucinations in a training-free manner. However, the potential of altering the information in visual inputs remains under-explored, motivating a deeper investigation into the behavior of visual contrastive decoding. In this paper, we first explore various ways of changing visual content for contrastive decoding, including image downsampling and image editing. Downsampling removes fine-grained detail, while editing introduces new content into the image; each provides a distinct kind of visual contrastive sample. To further study the benefits of using different contrastive samples, we analyze probability-level metrics, including entropy and distribution distance. Interestingly, the effectiveness of these samples in mitigating hallucinations varies considerably across LVLMs and benchmarks. Based on our analysis, we propose a simple yet effective method for combining contrastive samples, offering a practical solution for applying contrastive decoding across various scenarios. Extensive experiments on multiple benchmarks validate the proposed fusion method.
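To make the mechanism concrete, the following is a minimal sketch of the logit-level operations the abstract describes: a standard visual contrastive decoding adjustment (amplifying tokens supported by the original image and suppressing those also predicted from a distorted one), the probability-level metrics mentioned (entropy and a distribution distance, here KL divergence), and a simple weighted fusion across multiple contrastive samples. The function names, the `alpha` scaling parameter, and the averaging-based fusion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def contrastive_logits(logits_orig, logits_distorted, alpha=1.0):
    # Contrastive decoding adjustment: boost tokens favored by the
    # original image and penalize tokens the distorted image also
    # predicts (candidate hallucinations). `alpha` is an assumed knob.
    return (1 + alpha) * logits_orig - alpha * logits_distorted

def entropy(p, eps=1e-12):
    # Shannon entropy of a next-token distribution (nats).
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def kl_divergence(p, q, eps=1e-12):
    # One choice of "distribution distance" between original and
    # distorted-input output distributions.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float((p * np.log(p / q)).sum())

def fuse_contrastive(logits_orig, distorted_list, alpha=1.0, weights=None):
    # Hypothetical fusion: average the contrastive adjustments from
    # several distorted samples (e.g. downsampled and edited images).
    if weights is None:
        weights = [1.0 / len(distorted_list)] * len(distorted_list)
    fused = np.zeros_like(logits_orig)
    for w, ld in zip(weights, distorted_list):
        fused += w * contrastive_logits(logits_orig, ld, alpha)
    return fused
```

For example, if a hallucinated token scores highly under both the original and the distorted input, the subtraction in `contrastive_logits` suppresses it relative to tokens that only the original image supports.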