Large-scale language models (LLMs) such as ChatGPT have demonstrated impressive abilities in generating responses that follow human instructions. However, their use in the medical field can be challenging due to their lack of specific, in-depth domain knowledge. In this study, we present a system called LLMs Augmented with Medical Textbooks (LLM-AMT), designed to enhance the proficiency of LLMs in specialized domains. LLM-AMT integrates authoritative medical textbooks into the LLMs' framework using plug-and-play modules: a Query Augmenter, a Hybrid Textbook Retriever, and a Knowledge Self-Refiner, which together incorporate authoritative medical knowledge, while an LLM Reader aids contextual understanding. Our experimental results on three medical QA tasks demonstrate that LLM-AMT significantly improves response quality, with accuracy gains ranging from 11.6% to 16.6%. Notably, with GPT-4-Turbo as the base model, LLM-AMT outperforms the specialized Med-PaLM 2 model, which was pre-trained on a massive medical corpus, by 2-3%. We also found that, despite being 100x smaller, medical textbooks are a more effective retrieval corpus than Wikipedia in the medical domain, boosting performance by 7.8%-13.7%.
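The pipeline named in the abstract (Query Augmenter → Hybrid Textbook Retriever → Knowledge Self-Refiner → LLM Reader) can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: all class names, method signatures, and the toy lexical scoring below are assumptions, standing in for the LLM-based augmentation, hybrid (dense + sparse) retrieval, and LLM-based refinement the system actually uses.

```python
# Illustrative sketch of an LLM-AMT-style pipeline. Every component here is
# a hypothetical stub; the real system uses LLMs and a hybrid retriever.
from dataclasses import dataclass


@dataclass
class TextbookChunk:
    text: str
    score: float = 0.0


class QueryAugmenter:
    """Expands a raw question into retrieval-friendly queries (stub)."""

    def augment(self, question: str) -> list[str]:
        # A real augmenter might rewrite the question with an LLM; here we
        # simply pair the question with a keyword-only variant.
        keywords = " ".join(w for w in question.split() if len(w) > 3)
        return [question, keywords]


class HybridTextbookRetriever:
    """Toy lexical retriever over an in-memory textbook corpus (stub)."""

    def __init__(self, corpus: list[str]):
        self.corpus = corpus

    def retrieve(self, queries: list[str], k: int = 2) -> list[TextbookChunk]:
        scored = []
        for passage in self.corpus:
            # Count query-word occurrences as a crude relevance score.
            overlap = sum(
                1
                for q in queries
                for w in q.lower().split()
                if w in passage.lower()
            )
            scored.append(TextbookChunk(passage, float(overlap)))
        scored.sort(key=lambda c: c.score, reverse=True)
        return scored[:k]


class KnowledgeSelfRefiner:
    """Keeps only chunks with evidence of relevance (stub)."""

    def refine(self, chunks: list[TextbookChunk]) -> list[TextbookChunk]:
        return [c for c in chunks if c.score > 0]


def answer(question: str, corpus: list[str]) -> str:
    queries = QueryAugmenter().augment(question)
    chunks = HybridTextbookRetriever(corpus).retrieve(queries)
    evidence = KnowledgeSelfRefiner().refine(chunks)
    # The LLM Reader would condition its answer on this evidence; as a
    # stand-in we return the top retrieved chunk.
    return evidence[0].text if evidence else "no evidence found"


corpus = [
    "Insulin lowers blood glucose by promoting cellular uptake.",
    "The femur is the longest bone in the human body.",
]
print(answer("What hormone lowers blood glucose?", corpus))
```

The plug-and-play structure is the point: each stage exposes a narrow interface, so a stronger augmenter, retriever, or refiner can be swapped in without touching the base LLM.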