Large language models (LLMs) like ChatGPT have demonstrated impressive abilities in generating responses to human instructions. However, their use in the medical field can be challenging because they lack specific, in-depth domain knowledge. In this study, we present a system called LLMs Augmented with Medical Textbooks (LLM-AMT), designed to enhance the proficiency of LLMs in specialized domains. LLM-AMT integrates authoritative medical textbooks into the LLM framework through plug-and-play modules: a Query Augmenter, a Hybrid Textbook Retriever, and a Knowledge Self-Refiner, which together incorporate authoritative medical knowledge; an LLM Reader additionally aids contextual understanding. Experimental results on three medical QA tasks demonstrate that LLM-AMT significantly improves response quality, with accuracy gains ranging from 11.6% to 16.6%. Notably, with GPT-4-Turbo as the base model, LLM-AMT outperforms the specialized Med-PaLM 2 model, which was pre-trained on a massive medical corpus, by 2-3%. We also found that, despite being 100x smaller, medical textbooks are a more effective retrieval corpus than Wikipedia in the medical domain, boosting performance by 7.8%-13.7%.
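The abstract names a four-stage pipeline (Query Augmenter, Hybrid Textbook Retriever, Knowledge Self-Refiner, LLM Reader). The sketch below shows how those stages could be wired together; every function body here is an illustrative stand-in (the paper's actual modules are LLM- and embedding-based), and the textbook passages and helper names are hypothetical.

```python
import re

# Minimal sketch of the LLM-AMT pipeline stages, assuming simple
# keyword-overlap retrieval as a stand-in for the paper's hybrid retriever.

def augment_query(question: str) -> list[str]:
    """Query Augmenter: expand the question into multiple retrieval queries."""
    # Placeholder expansion; the real module rewrites queries with an LLM.
    return [question, f"definition of terms in: {question}"]

def hybrid_retrieve(queries: list[str], passages: list[str], k: int = 2) -> list[str]:
    """Hybrid Textbook Retriever: rank passages by keyword overlap (stand-in)."""
    def tokens(s: str) -> set[str]:
        return set(re.findall(r"[a-z]+", s.lower()))
    scored = {p: max(len(tokens(q) & tokens(p)) for q in queries) for p in passages}
    return sorted(scored, key=scored.get, reverse=True)[:k]

def self_refine(passages: list[str]) -> list[str]:
    """Knowledge Self-Refiner: drop irrelevant evidence (stand-in filter)."""
    # Placeholder: the real module uses an LLM to judge relevance.
    return [p for p in passages if p]

def llm_reader(question: str, evidence: list[str]) -> str:
    """LLM Reader: answer conditioned on the retrieved textbook evidence."""
    return f"Answer to '{question}' grounded in {len(evidence)} textbook passage(s)."

# Hypothetical textbook corpus for demonstration.
textbooks = [
    "Hypertension is persistently elevated arterial blood pressure.",
    "The liver metabolizes many drugs via cytochrome P450 enzymes.",
]

queries = augment_query("What is hypertension?")
evidence = self_refine(hybrid_retrieve(queries, textbooks, k=1))
print(llm_reader("What is hypertension?", evidence))
```

The design point the abstract makes is that the knowledge source is a small, authoritative corpus (textbooks) rather than a large general one (Wikipedia); in this sketch that choice is just the contents of the `textbooks` list.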