Several recent works seek to develop foundation models specifically for medical applications, adapting general-purpose large language models (LLMs) and vision-language models (VLMs) via continued pretraining on publicly available biomedical corpora. These works typically claim that such domain-adaptive pretraining (DAPT) improves performance on downstream medical tasks, such as answering medical licensing exam questions. In this paper, we compare seven public "medical" LLMs and two VLMs against their corresponding base models, arriving at a different conclusion: all medical VLMs and nearly all medical LLMs fail to consistently improve over their base models in the zero-/few-shot prompting regime for medical question-answering (QA) tasks. For instance, across the tasks and model pairs we consider in the 3-shot setting, medical LLMs only outperform their base models in 12.1% of cases, reach a (statistical) tie in 49.8% of cases, and are significantly worse than their base models in the remaining 38.2% of cases. Our conclusions are based on (i) comparing each medical model head-to-head, directly against the corresponding base model; (ii) optimizing the prompts for each model separately; and (iii) accounting for statistical uncertainty in comparisons. While these basic practices are not consistently adopted in the literature, our ablations show that they substantially impact conclusions. Our findings suggest that state-of-the-art general-domain models may already exhibit strong medical knowledge and reasoning capabilities, and offer recommendations to strengthen the conclusions of future studies.
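The statistical-uncertainty accounting in point (iii) can be illustrated with a paired bootstrap over questions: both models are scored on the same items, and the accuracy difference is resampled to decide whether it is distinguishable from zero. This is a minimal sketch under assumed conventions (0/1 correctness vectors, a two-sided p-value), not the paper's exact procedure; the function name and defaults are hypothetical.

```python
import random

def paired_bootstrap_test(base_correct, med_correct, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value for the accuracy gap between a medical
    model and its base model on the same questions.

    base_correct / med_correct: parallel 0/1 lists, one entry per question.
    Returns (observed accuracy difference, p-value).
    """
    assert len(base_correct) == len(med_correct)
    n = len(base_correct)
    rng = random.Random(seed)
    observed = (sum(med_correct) - sum(base_correct)) / n
    deltas = []
    for _ in range(n_boot):
        # Resample question indices with replacement (paired: same index
        # picks both models' outcomes, preserving per-question correlation).
        idx = [rng.randrange(n) for _ in range(n)]
        deltas.append(sum(med_correct[i] - base_correct[i] for i in idx) / n)
    # Two-sided: how often the resampled difference crosses zero.
    p = 2 * min(sum(d <= 0 for d in deltas), sum(d >= 0 for d in deltas)) / n_boot
    return observed, min(p, 1.0)
```

A "(statistical) tie" in the abstract's sense corresponds to a p-value above the chosen significance level: the observed gap could plausibly arise from sampling noise over the benchmark's questions alone.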