The burgeoning interest in developing Large Language Models (LLMs) with up to a trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This underscores the importance of exploring the potential of Small Language Models (SLMs) as a resource-efficient alternative. In this context, we introduce MiniCPM, specifically the 1.2B and 2.4B non-embedding-parameter variants, which not only excel in their respective categories but also demonstrate capabilities on par with 7B-13B LLMs. While focusing on SLMs, our approach exhibits scalability in both the model and data dimensions for future LLM research. For model scaling, we employ extensive model wind tunnel experiments to achieve stable and optimal scaling. For data scaling, we introduce a Warmup-Stable-Decay (WSD) learning rate scheduler (LRS), which is conducive to continuous training and domain adaptation. We present an in-depth analysis of the intriguing training dynamics that occur under the WSD LRS. With the WSD LRS, we can efficiently study the data-model scaling law without extensive retraining experiments along both the model and data axes, from which we derive a much higher compute-optimal data-model ratio than the Chinchilla-Optimal one. Additionally, we introduce the MiniCPM family, including MiniCPM-DPO, MiniCPM-MoE, and MiniCPM-128K, whose excellent performance further cements MiniCPM's foundation in diverse SLM applications. MiniCPM models are publicly available at https://github.com/OpenBMB/MiniCPM .
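The WSD schedule named above can be sketched as a simple step-to-learning-rate function. This is a minimal illustration, not the paper's exact formulation: the phase lengths, peak learning rate, and the choice of exponential decay to a `final_ratio` floor are all assumed here for concreteness.

```python
def wsd_lr(step: int,
           peak_lr: float = 0.01,
           warmup_steps: int = 1000,
           stable_steps: int = 8000,
           decay_steps: int = 1000,
           final_ratio: float = 0.1) -> float:
    """Sketch of a Warmup-Stable-Decay (WSD) learning rate schedule.

    Phases:
      warmup: linear ramp from 0 to peak_lr over `warmup_steps`
      stable: constant at peak_lr for `stable_steps` (can be extended
              indefinitely for continuous training; decay is only run
              when a checkpoint is to be finalized)
      decay:  exponential interpolation from peak_lr down to
              final_ratio * peak_lr over `decay_steps`
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    if step < warmup_steps + stable_steps:
        return peak_lr
    t = min(step - warmup_steps - stable_steps, decay_steps) / decay_steps
    return peak_lr * (final_ratio ** t)
```

Because the stable phase uses a constant learning rate, training can be resumed from any stable-phase checkpoint and decayed at a later step, which is what makes the schedule convenient for continuous training and for fitting data-scaling curves from a single run.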