Recent advancements in large language models (LLMs) have given rise to the LLM-as-a-judge paradigm, showcasing their potential to deliver human-like judgments. However, in the field of machine translation (MT) evaluation, current LLM-as-a-judge methods fall short of learned automatic metrics. In this paper, we propose Multidimensional Multi-Agent Debate (M-MAD), a systematic LLM-based multi-agent framework for advanced LLM-as-a-judge MT evaluation. Our findings demonstrate that M-MAD achieves significant advancements by (1) decoupling heuristic MQM criteria into distinct evaluation dimensions for fine-grained assessment; (2) employing multi-agent debates to harness the collaborative reasoning capabilities of LLMs; (3) synthesizing dimension-specific results into a final evaluation judgment to ensure robust and reliable outcomes. Comprehensive experiments show that M-MAD not only outperforms all existing LLM-as-a-judge methods but also competes with state-of-the-art reference-based automatic metrics, even when powered by a suboptimal model like GPT-4o mini. Detailed ablations and analyses highlight the superiority of our framework design, offering a fresh perspective on the LLM-as-a-judge paradigm. Our code and data are publicly available at https://github.com/SU-JIAYUAN/M-MAD.