Diagnostic errors in healthcare persist as a critical challenge, with increasing numbers of patients turning to online resources for health information. While AI-powered healthcare chatbots show promise, no standardized, scalable framework exists for evaluating their diagnostic capabilities. This study introduces a scalable benchmarking methodology for assessing health AI systems and demonstrates its application through August, an AI-driven conversational chatbot. Our methodology employs 400 validated clinical vignettes across 14 medical specialties, using AI-powered patient actors to simulate realistic clinical interactions. In systematic testing, August achieved a top-one diagnostic accuracy of 81.8% (327/400 cases) and a top-two accuracy of 85.0% (340/400 cases), significantly outperforming traditional symptom checkers. The system demonstrated 95.8% accuracy in specialist referrals and asked 47% fewer questions than conventional symptom checkers (mean 16 vs. 29 questions), while maintaining empathetic dialogue throughout consultations. These findings demonstrate the potential of AI chatbots to enhance healthcare delivery, though implementation challenges remain regarding real-world validation and the integration of objective clinical data. This research provides a reproducible framework for evaluating healthcare AI systems, contributing to the responsible development and deployment of AI in clinical settings.
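For readers implementing a comparable benchmark, the sketch below shows one way the headline metrics (top-k diagnostic accuracy and mean question count) could be computed from per-vignette consultation records. It is a minimal illustration, not the paper's code: the `VignetteResult` structure, field names, and helper functions are all assumptions introduced here, and the assumption is that each simulated consultation yields a ranked differential diagnosis and a count of questions asked.

```python
from dataclasses import dataclass

@dataclass
class VignetteResult:
    """Outcome of one simulated consultation (illustrative structure, not from the paper)."""
    true_diagnosis: str
    ranked_differential: list[str]  # chatbot's diagnoses, most likely first
    questions_asked: int

def top_k_accuracy(results: list[VignetteResult], k: int) -> float:
    """Fraction of cases whose true diagnosis appears in the chatbot's top-k differential."""
    hits = sum(r.true_diagnosis in r.ranked_differential[:k] for r in results)
    return hits / len(results)

def mean_questions(results: list[VignetteResult]) -> float:
    """Average number of questions the chatbot asked per consultation."""
    return sum(r.questions_asked for r in results) / len(results)

# Toy usage with two fabricated cases (not real study data):
results = [
    VignetteResult("migraine", ["migraine", "tension headache"], 14),
    VignetteResult("appendicitis", ["gastroenteritis", "appendicitis"], 18),
]
print(f"top-1 accuracy: {top_k_accuracy(results, 1):.1%}")   # 50.0%
print(f"top-2 accuracy: {top_k_accuracy(results, 2):.1%}")   # 100.0%
print(f"mean questions: {mean_questions(results):.1f}")      # 16.0
```

Under this scheme, the study's reported figures correspond to `top_k_accuracy(results, 1)` = 0.818 and `top_k_accuracy(results, 2)` = 0.850 over the 400 vignettes; referral accuracy would be computed analogously against a gold-standard specialty label per case.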