Large language models (LLMs) are attracting increasing interest as a means of improving clinical efficiency in medical diagnosis, owing to their unprecedented performance in modelling natural language. To ensure safe and reliable clinical application, evaluating LLMs is critical for mitigating potential risks, e.g., hallucinations. However, current evaluation methods rely heavily on labor-intensive human participation to obtain human-preferred judgements. To overcome this challenge, we propose an automatic evaluation paradigm tailored to assessing LLMs' capabilities in delivering clinical services, e.g., disease diagnosis and treatment. The paradigm comprises three basic elements: metric, data, and algorithm. Specifically, inspired by professional clinical practice pathways, we formulate an LLM-specific clinical pathway (LCP) that defines the clinical capabilities a doctor agent should possess. Standardized Patients (SPs) from medical education are then introduced to guide the collection of medical data for evaluation, which ensures the completeness of the evaluation procedure. Building on these steps, we develop a multi-agent framework that simulates the interactive environment between SPs and a doctor agent, equipped with Retrieval-Augmented Evaluation (RAE) to determine whether the doctor agent's behaviors accord with the LCP. This paradigm can be extended to any similar clinical scenario to automatically evaluate LLMs' medical capabilities. Applying it, we construct an evaluation benchmark in the field of urology, comprising an LCP, an SP dataset, and an automated RAE. Extensive experiments demonstrate the effectiveness of the proposed approach, providing insights for the safe and reliable deployment of LLMs in clinical practice.