AI chatbots are increasingly used for health advice, but their performance in psychiatric triage remains undercharacterized. Psychiatric triage is particularly challenging because urgency must often be inferred from thoughts, behavior, and context rather than from objective findings. We evaluated the performance of 15 frontier AI chatbots on psychiatric triage from realistic single-message disclosures using 112 clinical vignettes, each paired with 1 of 4 original benchmark triage labels: A, routine; B, assessment within 1 week; C, assessment within 24 to 48 hours; and D, emergency care now. Vignettes covered 9 psychiatric presentation clusters and 9 focal risk dimensions, organized into 28 presentation-by-risk groups. Each group contributed 4 distinct vignettes, with 1 vignette at each triage level. Each vignette was rendered as a realistic human-authored conversational query, and the AI chatbots were tasked with assigning a triage label from that disclosure. Emergency under-triage occurred in 23 of 410 level D trials (5.6%), and all under-triaged emergencies were reassigned to level C urgency. Across target models, average accuracy ranged from 42.0% to 71.8%. Accuracy was highest for level D vignettes (94.3%) and lowest for level B vignettes (19.7%). Mean signed ordinal error was positive (+0.47 triage levels), indicating net over-triage. Dispersion was highest around the middle triage levels. All results were confirmed against clinician consensus labels from 50 medical doctors. When presented with user messages containing sufficient clinical information, frontier AI chatbots thus recognized psychiatric emergencies as requiring urgent medical assessment with near-zero error rates, yet showed marked over-triage for low- and intermediate-risk presentations.
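The signed ordinal error reported above can be sketched as follows. This is a minimal illustration, not the study's analysis code: it assumes triage labels A through D map to ordinal levels 0 through 3, and the trial data shown are hypothetical.

```python
# Sketch of the signed ordinal error metric: predicted minus true triage
# level, so positive values indicate over-triage and negative values
# indicate under-triage. Labels A-D are assumed to map to levels 0-3.
LEVELS = {"A": 0, "B": 1, "C": 2, "D": 3}

def signed_ordinal_errors(trials):
    """Per-trial signed error for (true_label, predicted_label) pairs."""
    return [LEVELS[pred] - LEVELS[true] for true, pred in trials]

def mean_signed_error(trials):
    """Average signed error across all trials, in triage levels."""
    errors = signed_ordinal_errors(trials)
    return sum(errors) / len(errors)

# Hypothetical trials: one exact match, one over-triage by one level
# (B rated as C), one under-triage by one level (D rated as C).
trials = [("D", "D"), ("B", "C"), ("D", "C")]
print(mean_signed_error(trials))  # (0 + 1 - 1) / 3 = 0.0
```

A positive mean on real data, such as the +0.47 reported here, indicates that over-triage errors outweigh under-triage errors on net.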