Recent advances in multimodal large language models (MLLMs) mark a shift from non-thinking models to post-trained reasoning models that solve complex problems through explicit thinking. However, whether such thinking mitigates hallucinations in multimodal perception and reasoning remains unclear. Self-reflective reasoning enhances robustness but can introduce additional hallucinations, and subtle perceptual errors still lead to incorrect or only coincidentally correct answers. Existing benchmarks primarily target models that predate reasoning MLLMs, neglecting the internal thinking process and failing to measure the hallucinations that arise during thinking. To address these challenges, we introduce MM-THEBench, a comprehensive benchmark for assessing hallucinations in the intermediate chains of thought (CoTs) of reasoning MLLMs. MM-THEBench features a fine-grained taxonomy grounded in cognitive dimensions, diverse data with verified reasoning annotations, and a multi-level automated evaluation framework. Extensive experiments on mainstream reasoning MLLMs reveal how thinking affects hallucination and reasoning capability across diverse multimodal tasks.