Large Language Models (LLMs) suffer from hallucinations and factual inaccuracies, especially in complex reasoning and fact verification tasks. Multi-Agent Debate (MAD) systems aim to improve answer accuracy by enabling multiple LLM agents to engage in dialogue, promoting diverse reasoning and mutual verification. However, existing MAD frameworks rely primarily on internal knowledge or static documents, leaving them vulnerable to hallucinations. While MADKE introduces external evidence to mitigate this, its one-time retrieval mechanism limits adaptability to new arguments or emerging information during the debate. To address these limitations, we propose Tool-MAD, a multi-agent debate framework that enhances factual verification by assigning each agent a distinct external tool, such as a search API or a RAG module. Tool-MAD introduces three key innovations: (1) a multi-agent debate framework in which agents leverage heterogeneous external tools, encouraging diverse perspectives; (2) an adaptive query formulation mechanism that iteratively refines evidence retrieval based on the flow of the debate; and (3) the integration of Faithfulness and Answer Relevance scores into the final decision process, allowing the Judge agent to quantitatively assess the coherence and question alignment of each response and effectively detect hallucinations. Experimental results on four fact verification benchmarks demonstrate that Tool-MAD consistently outperforms state-of-the-art MAD frameworks, achieving accuracy improvements of up to 5.5%. Furthermore, in medically specialized domains, Tool-MAD exhibits strong robustness and adaptability across various tool configurations and domain conditions, confirming its potential for broader real-world fact-checking applications.
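The Judge agent's score-based decision described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dataclass, field names, and equal weighting are assumptions, and in the actual framework the Faithfulness and Answer Relevance scores would come from LLM-based evaluators rather than being supplied directly.

```python
# Hypothetical sketch of a Judge agent combining Faithfulness and
# Answer Relevance scores to select among debating agents' responses.
from dataclasses import dataclass

@dataclass
class AgentResponse:
    agent_id: str            # e.g. an agent backed by a search API or a RAG module
    answer: str
    faithfulness: float      # grounding of the answer in retrieved evidence, in [0, 1]
    answer_relevance: float  # alignment of the answer with the question, in [0, 1]

def judge(responses, w_faith=0.5, w_rel=0.5):
    """Return the response maximizing a weighted sum of the two scores.

    The equal weighting is an illustrative assumption; the paper's Judge
    agent may aggregate the scores differently.
    """
    return max(responses, key=lambda r: w_faith * r.faithfulness + w_rel * r.answer_relevance)

responses = [
    AgentResponse("search_agent", "Claim is SUPPORTED.", faithfulness=0.9, answer_relevance=0.8),
    AgentResponse("rag_agent", "Claim is REFUTED.", faithfulness=0.6, answer_relevance=0.9),
]
best = judge(responses)
```

A low faithfulness score flags an answer that is poorly supported by its retrieved evidence, which is how a quantitative judge can detect likely hallucinations before issuing a verdict.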