The burgeoning interest in developing Large Language Models (LLMs) with up to a trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This scenario underscores the importance of exploring the potential of Small Language Models (SLMs) as a resource-efficient alternative. In this context, we introduce MiniCPM, in particular its 1.2B and 2.4B non-embedding-parameter variants, which not only excel in their respective categories but also demonstrate capabilities on par with 7B-13B LLMs. While focusing on SLMs, our approach exhibits scalability along both the model and data dimensions for future LLM research. Regarding model scaling, we employ extensive model wind tunnel experiments for stable and optimal scaling. For data scaling, we introduce a Warmup-Stable-Decay (WSD) learning rate scheduler (LRS), conducive to continuous training and domain adaptation. We present an in-depth analysis of the intriguing training dynamics that arise under the WSD LRS. With the WSD LRS, we can now efficiently study the data-model scaling law without extensive retraining experiments along both the model and data axes, from which we derive a compute-optimal data-model ratio much higher than the Chinchilla-optimal one. Additionally, we introduce the MiniCPM family, including MiniCPM-DPO, MiniCPM-MoE, and MiniCPM-128K, whose excellent performance further cements MiniCPM's foundation in diverse SLM applications. MiniCPM models are publicly available at https://github.com/OpenBMB/MiniCPM .
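To make the WSD schedule concrete, the following is a minimal sketch of its three phases: a linear warmup to the peak learning rate, a long stable phase held at that peak (from which checkpoints can be branched for continued training or domain adaptation), and a short final decay. The function name `wsd_lr`, its parameters, and the cosine-shaped decay are illustrative assumptions, not the authors' implementation; the decreasing function used in the decay phase is a design choice.

```python
import math

def wsd_lr(step, peak_lr, warmup_steps, stable_steps, decay_steps, min_lr=0.0):
    """Illustrative Warmup-Stable-Decay (WSD) learning-rate schedule.

    Phase 1: linear warmup from 0 to peak_lr.
    Phase 2: hold peak_lr constant; checkpoints taken here can be
             branched off for continual training or domain adaptation.
    Phase 3: anneal toward min_lr over a short decay window.
    """
    if step < warmup_steps:
        # Phase 1: linear warmup.
        return peak_lr * (step + 1) / warmup_steps
    if step < warmup_steps + stable_steps:
        # Phase 2: stable plateau at the peak learning rate.
        return peak_lr
    # Phase 3: decay. A cosine ramp is one possible decreasing function;
    # linear or exponential decay are equally valid choices here.
    t = min((step - warmup_steps - stable_steps) / decay_steps, 1.0)
    return min_lr + (peak_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * t))

# Example: a 1,000-step run with 10% warmup, 80% stable, 10% decay.
schedule = [wsd_lr(s, 1e-2, 100, 800, 100) for s in range(1000)]
```

Because the stable phase dominates the run, training can be resumed from any stable-phase checkpoint and only the short decay phase re-run, which is what makes the schedule economical for scaling-law studies.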