Short-video platforms have become major channels for misinformation, where deceptive claims frequently leverage visual experiments and social cues. While Multimodal Large Language Models (MLLMs) have demonstrated impressive reasoning capabilities, their robustness against misinformation entangled with cognitive biases remains under-explored. In this paper, we introduce a comprehensive evaluation framework built on a high-quality, manually annotated dataset of 200 short videos spanning four health domains. The dataset provides fine-grained annotations for three deceptive patterns: experimental errors, logical fallacies, and fabricated claims, each verified against evidence such as national standards and academic literature. We evaluate eight frontier MLLMs across five modality settings. Experimental results show that Gemini-2.5-Pro achieves the highest performance in the multimodal setting with a belief score of 71.5/100, while o3 performs the worst at 35.2. Furthermore, we investigate the social cues that induce false beliefs in videos and find that models are susceptible to biases such as authoritative channel IDs.