The rapid progress in Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showcasing how the diversity and distribution of datasets used in supervised fine-tuning (SFT) may enhance LLM performance. Remarkably, we successfully trained a smaller base model to achieve scores comparable to those of larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This suggests that even smaller models may reach high performance levels when trained on carefully curated and varied datasets. By integrating a wide range of instructional content, our approach addresses potential issues such as data quality inconsistencies. Our results imply that a broader spectrum of training data may enhance a model's ability to generalize and perform effectively across different medical scenarios, highlighting the importance of dataset quality and diversity in fine-tuning. We open-source the model for future research at https://github.com/CAS-SIAT-XinHai/CollectiveSFT.
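The abstract's central claim is that controlling both the diversity and the distribution of SFT data matters as much as model size. The sketch below is one minimal way to express such a mixture, not the paper's actual pipeline: it interleaves several instruction corpora with explicit sampling weights using the Hugging Face `datasets` library. The source names, file paths, and weights are hypothetical placeholders.

```python
# A minimal sketch (assumed pipeline, not the paper's exact method) of
# building a diverse, well-distributed SFT mixture from several
# instruction datasets. Source names and weights are hypothetical.
from datasets import load_dataset, interleave_datasets

# Hypothetical medical instruction sources and their mixture weights.
sources = {
    "medical_exam_qa": 0.40,      # exam-style multiple-choice questions
    "clinical_dialogue": 0.35,    # doctor-patient conversations
    "drug_knowledge": 0.25,       # pharmacology reference instructions
}

# Each source is assumed to be a local JSONL file of instruction records.
splits = [
    load_dataset("json", data_files=f"{name}.jsonl", split="train")
    for name in sources
]

# Weighted interleaving makes the mixture's distribution explicit,
# so no single source dominates the fine-tuning signal.
mixture = interleave_datasets(
    splits,
    probabilities=list(sources.values()),
    seed=42,
    stopping_strategy="all_exhausted",
)
```

The resulting `mixture` can then be passed to any standard SFT trainer; the key design choice illustrated here is that the per-source sampling probabilities, rather than raw corpus sizes, determine the training distribution.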