Fake news detection is particularly challenging in real-time scenarios, where emerging events often lack sufficient supporting evidence. Existing approaches rely heavily on external evidence and therefore struggle to generalize under evidence scarcity. To address this issue, we propose Evaluation-Aware Selection of Experts (EASE), a novel framework for real-time fake news detection that dynamically adapts its decision-making process according to the assessed sufficiency of available evidence. EASE introduces a sequential evaluation mechanism comprising three independent perspectives: (1) evidence-based evaluation, which assesses the available evidence and incorporates it into decision-making only when it is sufficiently supportive; (2) reasoning-based evaluation, which leverages the world knowledge of large language models (LLMs) and applies it only when its reliability is adequately established; and (3) a sentiment-based fallback, which integrates sentiment cues when neither evidence nor reasoning is reliable. To improve the accuracy of these evaluations, EASE employs instruction tuning with pseudo labels to guide each evaluator in justifying its perspective-specific assessment through interpretable reasoning. The expert modules then integrate the evaluators' justified assessments with the news content to enable evaluation-aware decision-making, thereby improving overall detection accuracy. Moreover, we introduce RealTimeNews-25, a new benchmark comprising recent news items for evaluating model generalization to emerging news with limited evidence. Extensive experiments demonstrate that EASE not only achieves state-of-the-art performance across multiple benchmarks but also significantly improves generalization to real-time news. The code and dataset are available at https://github.com/wgyhhhh/EASE.
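The sequential evaluation mechanism can be viewed as a cascade that routes each news item to the first expert whose signal is judged reliable. The following is a minimal sketch of that control flow only; all function and class names (`Assessment`, `ease_cascade`, the evaluator and expert callables) are hypothetical illustrations, not the released implementation.

```python
# Hypothetical sketch of EASE's sequential evaluation cascade:
# evidence evaluator -> reasoning evaluator -> sentiment fallback.
from dataclasses import dataclass


@dataclass
class Assessment:
    usable: bool    # evaluator's verdict: is this signal reliable enough?
    rationale: str  # interpretable justification produced by the evaluator


def ease_cascade(news, evidence,
                 eval_evidence, eval_reasoning,
                 evidence_expert, reasoning_expert, sentiment_expert):
    """Route a news item to the first expert whose signal passes evaluation."""
    # Stage 1: use external evidence only if it is sufficiently supportive.
    a = eval_evidence(news, evidence)
    if a.usable:
        return evidence_expert(news, evidence, a.rationale)
    # Stage 2: fall back to LLM world knowledge only if judged reliable.
    a = eval_reasoning(news)
    if a.usable:
        return reasoning_expert(news, a.rationale)
    # Stage 3: sentiment cues when neither evidence nor reasoning is reliable.
    return sentiment_expert(news)
```

Each expert also receives the evaluator's rationale alongside the news content, reflecting the evaluation-aware decision-making described above.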