Large language models (LLMs) are routinely used by physicians and patients for medical advice, yet their clinical safety profiles remain poorly characterized. We present NOHARM (Numerous Options Harm Assessment for Risk in Medicine), a benchmark using 100 real primary care-to-specialist consultation cases to measure frequency and severity of harm from LLM-generated medical recommendations. NOHARM covers 10 specialties, with 12,747 expert annotations for 4,249 clinical management options. Across 31 LLMs, potential for severe harm from LLM recommendations occurs in up to 22.2% (95% CI 21.6-22.8%) of cases, with harm of omission accounting for 76.6% (95% CI 76.4-76.8%) of errors. Safety performance is only moderately correlated (r = 0.61-0.64) with existing AI and medical knowledge benchmarks. The best models outperform generalist physicians on safety (mean difference 9.7%, 95% CI 7.0-12.5%), and a diverse multi-agent approach improves safety compared to solo models (mean difference 8.0%, 95% CI 4.0-12.1%). Therefore, despite strong performance on existing evaluations, widely used AI models can produce severely harmful medical advice at nontrivial rates, underscoring clinical safety as a distinct performance dimension necessitating explicit measurement.
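The abstract reports rates with 95% confidence intervals (e.g., severe harm in up to 22.2%, 95% CI 21.6-22.8%). The paper's intervals are much narrower than a per-case binomial interval over 100 cases would give, so they presumably pool across models and annotations; the exact CI method is not stated here. As a hedged illustration only, the sketch below computes a standard Wilson score interval for a raw proportion of 22/100 — a hypothetical calculation, not the paper's procedure:

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion k/n.

    z = 1.96 gives an approximate 95% interval. This is a generic
    textbook formula, not the CI method used by the NOHARM paper.
    """
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 22 harmful cases out of 100: the interval spans roughly 15-31%,
# far wider than the pooled CI reported in the abstract.
lo, hi = wilson_ci(22, 100)
print(f"{lo:.3f}-{hi:.3f}")
```

The contrast in interval width is the point: tight CIs like 21.6-22.8% can only come from aggregating many more observations (e.g., the 12,747 expert annotations) than the 100 base cases.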


