Generative Large Language Models (LLMs) hold significant promise in healthcare, demonstrating capabilities such as passing medical licensing exams and providing clinical knowledge. However, their current use as information-retrieval tools is limited by challenges such as data staleness, resource demands, and the occasional generation of incorrect information. This study assessed the potential of LLMs to function as autonomous agents in a simulated tertiary care medical center, using real-world clinical cases across multiple specialties. Both proprietary and open-source LLMs were evaluated, with Retrieval Augmented Generation (RAG) used to enhance contextual relevance. Proprietary models, particularly GPT-4, generally outperformed open-source models, showing improved guideline adherence and more accurate responses with RAG. Manual evaluation by expert clinicians proved crucial for validating model outputs, underscoring the importance of human oversight in LLM operation. Further, the study emphasizes Natural Language Programming (NLP) as the appropriate paradigm for modifying model behavior, allowing precise adjustments through tailored prompts and real-world interactions. These findings highlight the potential of LLMs to enhance and supplement clinical decision-making, while underscoring the value of continuous expert involvement and the flexibility of NLP in ensuring the models' reliability and effectiveness in healthcare settings.
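The RAG pipeline mentioned above can be illustrated with a minimal, self-contained sketch: retrieve the guideline snippets most relevant to a clinical query, then prepend them as grounding context to the prompt sent to the model. This is a schematic only; the scoring function, the `guideline_snippets` corpus, and the helper names are illustrative assumptions, not details from the study, and production systems would use dense embeddings with vector search rather than term overlap.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase terms.
    A real RAG system would compare dense embedding vectors instead."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model's answer is grounded in it."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

# Hypothetical guideline corpus for illustration.
guideline_snippets = [
    "Sepsis: begin broad-spectrum antibiotics within one hour of recognition.",
    "Hypertension: first-line agents include thiazide diuretics and ACE inhibitors.",
    "Asthma: inhaled corticosteroids are the preferred long-term controller therapy.",
]

prompt = build_rag_prompt("first-line treatment for hypertension", guideline_snippets)
print(prompt)
```

The resulting prompt would then be passed to a proprietary or open-source LLM; the study's comparison concerns how faithfully different models use such retrieved guideline context.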