Ramaswamy et al. reported in \textit{Nature Medicine} that ChatGPT Health under-triages 51.6\% of emergencies, concluding that consumer-facing AI triage poses safety risks. However, their evaluation used an exam-style protocol -- forced A/B/C/D output, knowledge suppression, and suppression of clarifying questions -- that differs fundamentally from how consumers use health chatbots. We tested five frontier LLMs (GPT-5.2, Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3 Flash, Gemini 3.1 Pro) on a 17-scenario partial replication bank under constrained (exam-style, 1,275 trials) and naturalistic (patient-style messages, 850 trials) conditions, with targeted ablations and prompt-faithful checks using the authors' released prompts. Naturalistic interaction improved triage accuracy by 6.4 percentage points ($p = 0.015$). Diabetic ketoacidosis was correctly triaged in 100\% of trials across all models and conditions. Asthma triage improved from 48\% to 80\%. The forced A/B/C/D format was the dominant failure mechanism: three models scored 0--24\% with forced choice but 100\% with free text (all $p < 10^{-8}$), consistently recommending emergency care in their own words while the forced-choice format registered under-triage. Prompt-faithful checks on the authors' exact released prompts confirmed the scaffold produces model-dependent, case-dependent results. The headline under-triage rate is highly contingent on evaluation format and should not be interpreted as a stable estimate of deployed triage behavior. Valid evaluation of consumer health AI requires testing under conditions that reflect actual use.