Large Language Models (LLMs) are increasingly used for clinical decision support, where hallucinations and unsafe suggestions may pose direct risks to patient safety. These risks are particularly challenging because they often manifest as subtle clinical errors that evade detection by generic metrics, while expert-authored fine-grained rubrics remain costly to construct and difficult to scale. In this paper, we propose a retrieval-augmented multi-agent framework designed to automate the generation of instance-specific evaluation rubrics. Our approach grounds evaluation in authoritative medical evidence by decomposing retrieved content into atomic facts and synthesizing them with user interaction constraints to form verifiable, fine-grained evaluation criteria. Evaluated on HealthBench, our framework achieves a Clinical Intent Alignment (CIA) score of 60.12%, a statistically significant improvement over the GPT-4o baseline (55.16%). In discriminative tests, our rubrics yield a mean score delta of $\mu_\Delta = 8.658$ and an AUROC of 0.977, nearly doubling the quality separation achieved by the GPT-4o baseline ($\mu_\Delta = 4.972$). Beyond evaluation, our rubrics effectively guide response refinement, improving quality by 9.2 percentage points (from 59.0% to 68.2%). This provides a scalable and transparent foundation for both evaluating and improving medical LLMs. The code is available at https://anonymous.4open.science/r/Automated-Rubric-Generation-AF3C/.
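The pipeline summarized above can be sketched at a high level: retrieved evidence is decomposed into atomic facts, combined with user interaction constraints to form criteria, and a response is scored against the resulting rubric. This is a minimal illustrative sketch only; all names (`atomic_facts`, `make_rubric`, `score_response`) and the placeholder sentence-splitting and keyword checks are assumptions, not the framework's actual agents or API.

```python
from dataclasses import dataclass

# Illustrative sketch of the rubric-generation pipeline, not the paper's code.
# In the real framework, each step below is performed by an LLM agent over
# retrieved authoritative medical evidence.

@dataclass
class Criterion:
    text: str      # verifiable, fine-grained evaluation criterion
    weight: float  # relative importance when scoring

def atomic_facts(evidence: list[str]) -> list[str]:
    """Decompose retrieved evidence into atomic facts.
    Placeholder: one fact per sentence (an LLM agent in practice)."""
    return [s.strip() for doc in evidence for s in doc.split(".") if s.strip()]

def make_rubric(facts: list[str], constraints: list[str]) -> list[Criterion]:
    """Synthesize atomic facts with user interaction constraints
    into instance-specific, uniformly weighted criteria."""
    items = facts + constraints
    w = 1.0 / len(items)
    return [Criterion(text=t, weight=w) for t in items]

def score_response(response: str, rubric: list[Criterion]) -> float:
    """Score = weighted fraction of criteria satisfied.
    Placeholder check: criterion text appears verbatim in the response."""
    return sum(c.weight for c in rubric if c.text.lower() in response.lower())

evidence = ["Aspirin inhibits platelet aggregation. Avoid in active GI bleeding"]
constraints = ["ask about current medications"]
rubric = make_rubric(atomic_facts(evidence), constraints)
print(len(rubric))  # 3 criteria
```

The same rubric serves both roles described in the abstract: a judge scores a response against it, and a generator can be prompted with the unsatisfied criteria to refine its response.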