Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework that augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating attribution accuracy, we measure whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magnetic-One agent framework, on datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer.