Large language models (LLMs) are routinely used by physicians and patients for medical advice, yet their clinical safety profiles remain poorly characterized. We present NOHARM (Numerous Options Harm Assessment for Risk in Medicine), a benchmark built from 100 real primary care-to-specialist consultation cases to measure the frequency and severity of harm from LLM-generated medical recommendations. NOHARM covers 10 specialties, with 12,747 expert annotations for 4,249 clinical management options. Across 31 LLMs, the potential for severe harm from LLM recommendations occurs in up to 22.2% (95% CI 21.6-22.8%) of cases, with harm of omission accounting for 76.6% (95% CI 76.4-76.8%) of errors. Safety performance is only moderately correlated (r = 0.61-0.64) with existing AI and medical knowledge benchmarks. The best models outperform generalist physicians on safety (mean difference 9.7%, 95% CI 7.0-12.5%), and a diverse multi-agent approach improves safety compared to solo models (mean difference 8.0%, 95% CI 4.0-12.1%). Therefore, despite strong performance on existing evaluations, widely used AI models can produce severely harmful medical advice at nontrivial rates, underscoring clinical safety as a distinct performance dimension that requires explicit measurement.