Hallucination in large multimodal models (LMMs), i.e., producing responses that appear correct but are actually incorrect, limits their reliability and applicability. This paper studies the hallucination problem of LMMs in the video modality, which is dynamic and more challenging than static modalities such as images and text. Motivated by this, we first present a comprehensive benchmark termed HAVEN for evaluating hallucinations of LMMs in video understanding tasks. It is built upon three dimensions, i.e., hallucination causes, hallucination aspects, and question formats, resulting in 6K questions. We then quantitatively study seven factors that influence hallucinations, e.g., video duration, model size, and model reasoning, via experiments with 16 LMMs on the presented benchmark. In addition, inspired by recent thinking models such as OpenAI o1, we propose a video thinking model that mitigates the hallucinations of LMMs via supervised reasoning fine-tuning (SRFT) and thinking-based direct preference optimization (TDPO), where SRFT enhances reasoning capabilities while TDPO reduces hallucinations in the thinking process. Extensive experiments and analyses demonstrate the effectiveness of our approach. Remarkably, it improves the baseline by 7.65% in accuracy on hallucination evaluation and reduces the bias score by 4.5%. The code and data are publicly available at https://github.com/Hongcheng-Gao/HAVEN.
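For intuition on the TDPO stage, the following is a minimal sketch assuming TDPO instantiates the standard DPO objective over paired thinking traces; the symbols $y_w$ (a preferred, hallucination-free reasoning trace) and $y_l$ (a dispreferred, hallucinated one) are our assumptions for illustration, not details stated in the abstract:

$$\mathcal{L}_{\mathrm{TDPO}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]$$

Here $x$ is the video-question input, $\pi_\theta$ is the policy being fine-tuned (presumably initialized from the SRFT checkpoint), $\pi_{\mathrm{ref}}$ is a frozen reference policy, $\beta$ controls the strength of the implicit KL regularization, and $\sigma$ is the sigmoid function.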