Knowledge distillation (KD) is an effective model compression method that can transfer the internal capabilities of large language models (LLMs) to smaller ones. However, the multi-modal probability distributions predicted by teacher LLMs are difficult for student models to learn. In this paper, we first demonstrate the importance of multi-modal distribution alignment through experiments and then highlight the inefficiency of existing KD approaches in learning multi-modal distributions. To address this problem, we propose Ranking Loss based Knowledge Distillation (RLKD), which encourages consistency between the teacher and student models in the ranking of peak predictions. By incorporating a word-level ranking loss, we ensure excellent compatibility with existing distillation objectives while fully leveraging the fine-grained information among the different categories in the peaks of the two predicted distributions. Experimental results demonstrate that our method enables the student model to better learn the teacher model's multi-modal distributions, leading to significant performance improvements on various downstream tasks.
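To make the idea of a word-level ranking loss over peak predictions concrete, the sketch below shows one plausible form: a pairwise hinge loss that penalizes the student whenever it inverts the teacher's ordering among the teacher's top-k ("peak") tokens. This is a minimal illustration under assumed details (top-k selection, hinge formulation, `margin` hyperparameter), not the paper's exact RLKD objective.

```python
import torch

def peak_ranking_loss(teacher_logits, student_logits, k=5, margin=0.0):
    """Pairwise margin ranking loss over the teacher's top-k ("peak") tokens.

    For each ordered pair (p, q) with p < q among the teacher's top-k tokens
    (sorted by descending teacher score), the student is penalized unless its
    logit for the higher-ranked token exceeds that for the lower-ranked one
    by at least `margin`. Shapes: (batch, vocab).
    Hypothetical formulation; the paper's RLKD loss may differ in detail.
    """
    # Indices of the teacher's k highest-scoring tokens, sorted descending,
    # so position p outranks position q whenever p < q.
    topk = teacher_logits.topk(k, dim=-1).indices          # (batch, k)
    s = student_logits.gather(-1, topk)                    # student scores at those tokens

    # Broadcast to all pairs: element [b, p, q] = s[b, p] - s[b, q].
    hi = s.unsqueeze(-1)                                   # (batch, k, 1)
    lo = s.unsqueeze(-2)                                   # (batch, 1, k)
    pairwise = torch.relu(margin - (hi - lo))              # hinge on each pair

    # Keep only pairs with p < q (strict upper triangle) and average.
    mask = torch.triu(torch.ones(k, k, dtype=torch.bool), diagonal=1)
    return pairwise[..., mask].mean()
```

Because the loss depends only on the relative order of the student's logits at the teacher's peak tokens, it adds fine-grained ranking information without constraining the absolute probability mass, which is what makes it easy to combine with standard distillation objectives such as KL divergence.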