While large language model (LLM)-based multi-agent systems show promise in simulating medical consultations, their evaluation is often confined to final-answer accuracy. This practice treats their internal collaborative processes as opaque "black boxes" and overlooks a critical question: is a diagnostic conclusion reached through a sound and verifiable reasoning pathway? The inscrutability of these systems poses a significant risk in high-stakes medical applications, where it can mask flawed or untrustworthy conclusions. To address this, we conduct a large-scale empirical study of 3,600 cases drawn from six medical datasets and six representative multi-agent frameworks. Through a rigorous mixed-methods approach combining qualitative analysis with quantitative auditing, we develop a comprehensive taxonomy of collaborative failure modes. Our quantitative audit reveals four dominant failure patterns: flawed consensus driven by shared model deficiencies, suppression of correct minority opinions, ineffective discussion dynamics, and critical information loss during synthesis. These findings demonstrate that high accuracy alone is an insufficient basis for clinical or public trust, and they underscore the urgent need for transparent, auditable reasoning processes as a cornerstone of the responsible development and deployment of medical AI.