The transition of Large Language Models (LLMs) from passive knowledge retrievers to autonomous clinical agents demands a shift in evaluation from static accuracy to dynamic behavioral reliability. To explore this boundary in dentistry, a domain where high-quality AI advice uniquely empowers patient-participatory decision-making, we present the Standardized Clinical Management & Performance Evaluation (SCMPE) benchmark, which comprehensively assesses performance from knowledge-oriented evaluations (static objective tasks) to workflow-based simulations (multi-turn simulated patient interactions). Our analysis reveals that while models demonstrate high proficiency on static objective tasks, their performance drops precipitously in dynamic clinical dialogues, indicating that the primary bottleneck lies not in knowledge retention but in the critical challenges of active information gathering and dynamic state tracking. Mapping "Guideline Adherence" against "Decision Quality" reveals a prevalent "High Efficacy, Low Safety" risk in general-purpose models. Furthermore, we quantify the impact of Retrieval-Augmented Generation (RAG). While RAG mitigates hallucinations in static tasks, its efficacy in dynamic workflows is limited and heterogeneous, sometimes even degrading performance. This underscores that external knowledge alone cannot bridge the reasoning gap without domain-adaptive pre-training. This study empirically charts the capability boundaries of dental LLMs, providing a roadmap for bridging the gap between standardized knowledge and safe, autonomous clinical practice.