With the rapid advancement of large models, numerous Multimodal Large Models (MLMs) that fuse text and vision have emerged. However, these MLMs remain susceptible to informational interference in visual perception, particularly in color perception, which introduces an additional risk of hallucination. To validate this hypothesis, we introduce the "What Color Is It" dataset, a novel benchmark constructed with a simple method to trigger single-modality visual hallucination in MLMs. Based on this dataset, we further investigate the underlying causes of hallucination in the visual modality of MLMs and propose potential solutions to enhance their robustness.