Instruction fine-tuning (IFT) has become crucial for aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications. However, previous studies mainly fine-tune LLMs on biomedical datasets with limited diversity, which often rely on benchmarks or narrow task scopes, significantly limiting their medical instruction-following ability and generalizability. To bridge this gap, we propose creating a diverse, machine-generated medical IFT dataset, MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality, expert-curated seed set. We then fine-tune LLaMA-series models on this dataset to develop AlpaCare. Despite using a smaller domain-specific dataset than previous medical LLMs, AlpaCare not only demonstrates superior performance in medical applications, with up to a 38.1% absolute gain over the best baselines in medical free-form instruction evaluations, but also achieves a 6.7% absolute gain averaged over multiple general-domain benchmarks. Human evaluation further shows that AlpaCare consistently outperforms the best baselines in terms of both correctness and helpfulness. We provide public access to our data, model, and codebase at https://github.com/XZhang97666/AlpaCare.