The rapid spread of rumors on social media platforms during breaking events severely hinders the dissemination of the truth. Previous studies reveal that the lack of annotated resources hinders the direct detection of unforeseen breaking events not covered in yesterday's news. Leveraging large language models (LLMs) for rumor detection holds significant promise. However, it is challenging for LLMs to provide comprehensive responses to complex or controversial issues due to limited diversity. In this work, we propose Stance Separated Multi-Agent Debate (S2MAD) to address this issue. Specifically, we first introduce Stance Separation, categorizing comments as either supporting or opposing the original claim. Claims are then classified as subjective or objective, enabling agents to generate reasonable initial viewpoints with prompt strategies tailored to each type of claim. Debaters then follow specific instructions through multiple rounds of debate to reach a consensus. If no consensus is reached, a judge agent evaluates the opinions and delivers a final verdict on the claim's veracity. Extensive experiments on two real-world datasets demonstrate that our proposed model outperforms state-of-the-art methods and effectively improves the performance of LLMs in breaking-event rumor detection.
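The pipeline described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: every agent here (stance separator, claim classifier, debaters, judge) would be an LLM call with its own prompt in S2MAD, and all heuristics, thresholds, and labels below are assumptions for illustration only.

```python
# Toy sketch of the S2MAD pipeline: stance separation -> claim typing
# -> multi-round debate -> judge fallback. All agent behaviors are
# stubbed with simple heuristics standing in for LLM prompts.

def separate_stances(comments):
    """Stance Separation: split comments into supporting vs. opposing
    the original claim (an LLM classification step in the paper)."""
    support, oppose = [], []
    for c in comments:
        (support if c["stance"] == "support" else oppose).append(c)
    return support, oppose

def classify_claim(claim):
    """Label the claim subjective or objective so agents can use a
    different prompt strategy per type (toy keyword cue here)."""
    subjective_cues = ("I think", "I believe", "I feel")
    return "subjective" if any(cue in claim for cue in subjective_cues) else "objective"

def debate(support, oppose, max_rounds=3):
    """Debaters argue over multiple rounds; if one side dominates we
    treat that as consensus, otherwise a judge agent gives the verdict."""
    for _ in range(max_rounds):
        if len(support) != len(oppose):
            # consensus stub: the larger side's stance wins
            return "non-rumor" if len(support) > len(oppose) else "rumor"
    # judge-agent fallback after all rounds end in a tie
    return "rumor"

claim = "The bridge collapsed this morning"
comments = [
    {"text": "Confirmed by local news", "stance": "support"},
    {"text": "This photo is from 2015", "stance": "oppose"},
    {"text": "Old footage, clearly fake", "stance": "oppose"},
]
support, oppose = separate_stances(comments)
verdict = debate(support, oppose)
print(classify_claim(claim), verdict)  # objective rumor
```

In the full method, consensus is reached through the debaters' instructed exchanges rather than a vote count, and the judge agent weighs the final opinions when the rounds are exhausted.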