In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain. Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models lag far behind. In this work, we introduce a two-fold approach to improve the proficiency of a small language model in the medical domain. We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4, prompted with textbook knowledge, to generate questions similar to those in the downstream task, and fine-tune the model on them. Additionally, we introduce ECN-QA, a novel medical question answering dataset containing ``progressive questions'' composed of related sequential questions, and demonstrate the benefits of our training strategy on it. The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned. The code and weights are available at https://github.com/raidium-med/MQG.