The rapid progress of Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showing how dataset diversity and distribution in supervised fine-tuning (SFT) can enhance LLM performance. Remarkably, we trained a smaller base model to achieve scores comparable to those of larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This suggests that even smaller models may reach high performance with carefully curated and varied datasets. By integrating a wide range of instructional content, our approach mitigates issues such as inconsistent data quality. Our results imply that a broader spectrum of training data can improve a model's ability to generalize and perform effectively across diverse medical scenarios, underscoring the importance of dataset quality and diversity in fine-tuning.
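The abstract's central idea — controlling the distribution of an SFT mixture across diverse instruction sources — can be sketched as weighted sampling. This is a minimal hypothetical illustration, not the paper's actual pipeline; the source names, weights, and `mix_sources` helper are assumptions for demonstration.

```python
import random

def mix_sources(sources, weights, n_samples, seed=0):
    """Draw an SFT mixture by sampling each example's source
    in proportion to the given weights (hypothetical sketch)."""
    rng = random.Random(seed)
    names = list(sources)
    source_weights = [weights[name] for name in names]
    mixture = []
    for _ in range(n_samples):
        # First pick a source according to the mixture weights,
        # then pick a (prompt, response) pair from that source.
        name = rng.choices(names, weights=source_weights)[0]
        mixture.append(rng.choice(sources[name]))
    return mixture

# Illustrative medical instruction sources (placeholder data).
sources = {
    "exam_qa":   [("What is hypertension?", "...")],
    "dialogue":  [("Patient reports fever and cough...", "...")],
    "knowledge": [("Define pharmacokinetics.", "...")],
}
weights = {"exam_qa": 0.5, "dialogue": 0.3, "knowledge": 0.2}
mix = mix_sources(sources, weights, n_samples=1000)
```

Reweighting the `weights` dictionary is one simple way to study how mixture distribution, independent of total data volume, affects downstream benchmark scores.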