Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) using large teacher LMs. While effective in fine-tuning, KD during pre-training faces challenges in efficiency, flexibility, and effectiveness. Existing methods either incur high computational costs due to online teacher inference, require tokenization matching between teacher and student LMs, or risk losing the difficulty and diversity of the teacher-generated training data. To address these issues, we propose MiniPLM, a KD framework for pre-training LMs that refines the training data distribution with the teacher's knowledge. For efficiency, MiniPLM performs offline teacher LM inference, allowing KD for multiple student LMs without adding training-time costs. For flexibility, MiniPLM operates solely on the training corpus, enabling KD across model families. For effectiveness, MiniPLM leverages the differences between large and small LMs to enhance the difficulty and diversity of the training data, helping student LMs acquire versatile and sophisticated knowledge. Extensive experiments demonstrate that MiniPLM boosts the student LMs' performance on 9 widely used downstream tasks, improves their language modeling capabilities, and reduces pre-training computation. The benefits of MiniPLM extend to large pre-training scales, as evidenced by the extrapolation of the scaling curves. Further analysis reveals that MiniPLM supports KD across model families and enhances the utilization of pre-training data. Our model, code, and data are available at https://github.com/thu-coai/MiniPLM.
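The data-refinement idea can be illustrated with a minimal sketch: given per-document log-likelihoods computed offline by the large teacher LM and by a small reference LM, keep the documents whose likelihood gap is largest, on the assumption that these are the difficult, diverse examples most useful to the student. The function name, scoring rule, and toy scores below are illustrative assumptions, not the paper's exact algorithm.

```python
def difference_select(docs, teacher_lp, ref_lp, k):
    """Select the k documents the teacher LM finds much more likely
    than a small reference LM (a hypothetical sketch of data
    refinement, not MiniPLM's exact procedure)."""
    # Score = log p_teacher(x) - log p_ref(x): a large gap suggests
    # difficult, diverse text the small model has not yet captured.
    scores = {d: teacher_lp[d] - ref_lp[d] for d in docs}
    return sorted(docs, key=lambda d: scores[d], reverse=True)[:k]

# Toy corpus with made-up offline log-likelihoods.
corpus = ["easy repeated text", "rare technical passage", "noisy gibberish"]
teacher = {"easy repeated text": -5.0,
           "rare technical passage": -8.0,
           "noisy gibberish": -20.0}
reference = {"easy repeated text": -5.5,
             "rare technical passage": -14.0,
             "noisy gibberish": -19.0}

selected = difference_select(corpus, teacher, reference, k=2)
print(selected)  # the rare passage scores highest; the gibberish is dropped
```

Because the teacher and reference scores are computed once offline over the corpus, the same refined dataset can be reused to pre-train many students at no extra training-time cost, and no teacher/student tokenizer match is needed.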