Instruction fine-tuning (IFT) has become crucial for aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications. However, previous studies have mainly fine-tuned LLMs on biomedical datasets with limited diversity, often drawn from benchmarks or narrow task scopes, which significantly limits their medical instruction-following ability and generalizability. To bridge this gap, we propose creating a diverse, machine-generated medical IFT dataset, MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality, expert-curated seed set. We then fine-tune LLaMA-series models on this dataset to develop AlpaCare. Despite using a smaller domain-specific dataset than previous medical LLMs, AlpaCare not only demonstrates superior performance on medical applications, with up to a 38.1% absolute gain over the best baselines in medical free-form instruction evaluations, but also achieves an average absolute gain of 6.7% across multiple general-domain benchmarks. Human evaluation further shows that AlpaCare consistently outperforms the best baselines in both correctness and helpfulness. We offer public access to our data, model, and codebase at https://github.com/XZhang97666/AlpaCare.
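The data-generation recipe the abstract describes, prompting GPT-4/ChatGPT with expert-curated seed tasks to synthesize new instruction-response pairs, can be illustrated with a minimal self-instruct-style sketch. The prompt wording, seed examples, model choices, and helper names below are illustrative assumptions, not the authors' actual MedInstruct-52k pipeline (see the linked repository for that); the sketch assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment.

```python
# Minimal sketch of a self-instruct-style medical IFT data-generation loop.
# ASSUMPTIONS: seed tasks, prompt text, and function names are hypothetical
# illustrations; the real MedInstruct-52k pipeline is in the linked repo.
import json
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few expert-curated seed tasks (hypothetical examples).
seed_tasks = [
    {"instruction": "Summarize the contraindications of metformin.",
     "input": ""},
    {"instruction": "Explain this radiology finding to a patient.",
     "input": "Ground-glass opacity in the right lower lobe."},
]

def generate_new_tasks(n_demos: int = 2) -> list[dict]:
    """Prompt GPT-4 with sampled seed demos to propose new medical tasks."""
    demos = random.sample(seed_tasks, k=min(n_demos, len(seed_tasks)))
    prompt = (
        "You are generating diverse medical instruction-tuning tasks.\n"
        "Here are example tasks:\n"
        + "\n".join(json.dumps(d) for d in demos)
        + "\nWrite 3 new, different tasks, one JSON object per line, "
          'with keys "instruction" and "input".'
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    lines = resp.choices[0].message.content.strip().splitlines()
    # Keep only lines that parse as JSON objects; model output can be noisy.
    return [json.loads(l) for l in lines if l.strip().startswith("{")]

def answer_task(task: dict) -> str:
    """Use ChatGPT to draft a response for a generated task."""
    content = f"{task['instruction']}\n{task['input']}".strip()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Each iteration emits one instruction/input/output training record.
    for task in generate_new_tasks():
        task["output"] = answer_task(task)
        print(json.dumps(task))
```

Repeating such a loop while deduplicating against previously generated tasks is one plausible way to scale a small seed set to tens of thousands of diverse IFT examples.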