Multimodal large language models (MLLMs) have recently shown significant advancements in video understanding, excelling in content reasoning and instruction-following tasks. However, hallucination, where models generate inaccurate or misleading content, remains underexplored in the video domain. Building on the observation that MLLM visual encoders often fail to distinguish visually different yet semantically similar video pairs, we introduce VidHalluc, the largest benchmark designed to examine hallucinations in MLLMs for video understanding. It consists of 5,002 videos, paired to highlight cases prone to hallucinations. VidHalluc assesses hallucinations across three critical dimensions: (1) action, (2) temporal sequence, and (3) scene transition. Comprehensive testing shows that most MLLMs are vulnerable to hallucinations across these dimensions. Furthermore, we propose DINO-HEAL, a training-free method that reduces hallucinations by incorporating spatial saliency from DINOv2 to reweight visual features during inference. Our results show that DINO-HEAL consistently improves performance on VidHalluc, achieving an average improvement of 3.02% in mitigating hallucinations across all tasks. Both the VidHalluc benchmark and DINO-HEAL code are available at https://people-robots.github.io/vidhalluc.
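The abstract does not spell out how DINO-HEAL applies the saliency reweighting, so the following is only a minimal sketch of the general idea: per-patch visual features are rescaled by a normalized spatial saliency map before being passed to the language model. The `reweight_features` function and the random stand-ins for encoder outputs and DINOv2 saliency scores are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reweight_features(features, saliency):
    """Rescale per-patch visual features by spatial saliency.

    features: (N, D) array of patch features from a visual encoder.
    saliency: (N,) array of saliency scores; in DINO-HEAL these would
              come from DINOv2 (random stand-ins are used below).
    """
    # Softmax over patches turns raw scores into a distribution,
    # then scaling by N keeps the mean weight at 1 so the overall
    # feature magnitude is preserved.
    e = np.exp(saliency - saliency.max())
    w = e / e.sum() * saliency.size
    return features * w[:, None]

# Toy example: 14x14 = 196 patches with 768-dim features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((196, 768))
sal = rng.standard_normal(196)   # stand-in for DINOv2 saliency
out = reweight_features(feats, sal)
print(out.shape)  # (196, 768)
```

Because the method is training-free, a reweighting step like this can be inserted at inference time between the visual encoder and the language model without touching any model weights.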