Recent advances in Large Vision Language Models (LVLMs) have revolutionized how machines understand and generate textual responses from visual inputs, yet these models often produce "hallucinatory" outputs that misinterpret the visual information, undermining their reliability and trustworthiness. We propose RITUAL, a simple decoding method that reduces hallucinations by leveraging randomly transformed images as complementary inputs during decoding, adjusting the output probability distribution without additional training or external models. Our key insight is that random transformations expose the model to diverse visual perspectives, enabling it to correct misinterpretations that lead to hallucinations. Specifically, when the model hallucinates on the original image, the transformed images, altered in orientation, scale, or color, provide alternative viewpoints that help recalibrate its predictions. By integrating the probability distributions conditioned on the original and transformed images, RITUAL effectively reduces hallucinations. To further improve reliability and address the instability that arbitrary transformations can introduce, we introduce RITUAL+, an extension that selects image transformations based on self-feedback from the LVLM: rather than applying transformations at random, RITUAL+ uses the LVLM itself to evaluate candidate transformations and choose those most likely to reduce hallucinations in the given context. This self-adaptive approach mitigates the negative impact that certain transformations can have on specific tasks, yielding more consistent performance across scenarios. Experiments demonstrate that RITUAL and RITUAL+ significantly reduce hallucinations across several object-hallucination benchmarks.
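To make the decoding mechanism concrete, the following is a minimal sketch of the RITUAL idea in Python. The helper `lvlm_logits(image, prompt_ids)` is a hypothetical stand-in for any LVLM forward pass that returns next-token logits; the transformation set and the mixing weight `alpha` are illustrative assumptions, and a simple convex combination is used where the paper's exact integration rule may differ.

```python
import random
import torch
import torchvision.transforms.functional as TF

def ritual_next_token_logits(lvlm_logits, image, prompt_ids, alpha=0.3):
    """Sketch of RITUAL-style decoding: mix the next-token distributions
    conditioned on the original image and on one randomly transformed view.

    `lvlm_logits(image, prompt_ids)` is a hypothetical helper returning the
    model's next-token logits; `alpha` is an assumed mixing weight.
    """
    # Draw a random transformation (orientation, scale, or color change).
    transform = random.choice([
        lambda img: TF.hflip(img),                          # horizontal flip
        lambda img: TF.rotate(img, angle=90),               # rotation
        lambda img: TF.resize(img, [img.shape[-2] // 2,
                                    img.shape[-1] // 2]),   # downscale
        lambda img: TF.adjust_hue(img, hue_factor=0.2),     # color shift
    ])
    transformed = transform(image)

    # Next-token distributions conditioned on each view of the image.
    p_orig = torch.softmax(lvlm_logits(image, prompt_ids), dim=-1)
    p_trans = torch.softmax(lvlm_logits(transformed, prompt_ids), dim=-1)

    # Complementary-input adjustment: combine the two distributions so the
    # alternative viewpoint can recalibrate the original prediction.
    return torch.log((1 - alpha) * p_orig + alpha * p_trans)
```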
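RITUAL+ replaces the random draw with a self-feedback loop. The sketch below shows one way such a selection step could look; the scoring helper `lvlm_score(image, prompt)` is hypothetical (e.g., the probability the model assigns to a "yes" answer), and the prompt wording is an assumption rather than the paper's exact formulation.

```python
def select_transform(lvlm_score, image, question, candidates):
    """Sketch of RITUAL+ self-feedback selection: ask the LVLM itself which
    transformed view is most helpful for the question at hand, instead of
    sampling a transformation at random.

    `lvlm_score(image, prompt)` is a hypothetical helper returning a scalar
    helpfulness score; `candidates` is a list of image transformations.
    """
    feedback_prompt = (
        f"Question: {question}\n"
        "Does this view of the image help answer the question reliably? "
        "Answer yes or no."
    )
    # Score every candidate view with the LVLM and keep the transformation
    # the model judges most helpful for this context.
    scored = [(lvlm_score(t(image), feedback_prompt), t) for t in candidates]
    best_score, best_transform = max(scored, key=lambda pair: pair[0])
    return best_transform
```

The selected transformation would then feed into the same distribution-mixing step sketched above, in place of the random draw.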