The rapid proliferation of online misinformation threatens the stability of digital social systems and poses significant risks to public trust, policy, and safety, necessitating reliable automated fake news detection. Existing methods often struggle with multimodal content, domain generalization, and explainability. We propose AMPEND-LS, an agentic multi-persona, evidence-grounded framework with LLM-SLM synergy for multimodal fake news detection. AMPEND-LS integrates textual, visual, and contextual signals through a structured, LLM-powered reasoning pipeline, augmented with reverse image search, knowledge graph paths, and persuasion strategy analysis. To improve reliability, we introduce a credibility fusion mechanism that combines semantic similarity, domain trustworthiness, and temporal context, together with a complementary SLM classifier that mitigates LLM uncertainty and hallucination. Extensive experiments on three benchmark datasets demonstrate that AMPEND-LS consistently outperforms state-of-the-art baselines in accuracy, F1 score, and robustness. Qualitative case studies further highlight its transparent reasoning and resilience against evolving misinformation. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.