Large language models (LLMs) are routinely used by physicians and patients for medical advice, yet their clinical safety profiles remain poorly characterized. We present NOHARM (Numerous Options Harm Assessment for Risk in Medicine), a benchmark using 100 real primary-care-to-specialist consultation cases to measure harm frequency and severity from LLM-generated medical recommendations. NOHARM covers 10 specialties, with 12,747 expert annotations for 4,249 clinical management options. Across 31 LLMs, severe harm occurs in up to 22.2% (95% CI 21.6-22.8%) of cases, with harms of omission accounting for 76.6% (95% CI 76.4-76.8%) of errors. Safety performance is only moderately correlated (r = 0.61-0.64) with existing AI and medical knowledge benchmarks. The best models outperform generalist physicians on safety (mean difference 9.7%, 95% CI 7.0-12.5%), and a diverse multi-agent approach reduces harm compared to solo models (mean difference 8.0%, 95% CI 4.0-12.1%). Therefore, despite strong performance on existing evaluations, widely used AI models can produce severely harmful medical advice at nontrivial rates, underscoring clinical safety as a distinct performance dimension necessitating explicit measurement.