Large Language Models (LLMs) have exhibited impressive capabilities across a wide range of tasks, yet their vast parameter counts restrict their applicability in resource-constrained settings. Knowledge distillation (KD) offers a viable solution by transferring expertise from large teacher models to compact student models. However, traditional KD techniques face specific challenges when applied to LLMs, including restricted access to LLM outputs, significant teacher-student capacity gaps, and the inherited mis-calibration issue. In this work, we present PLaD, a novel preference-based LLM distillation framework. PLaD exploits the teacher-student capacity discrepancy to generate pseudo-preference pairs in which teacher outputs are preferred over student outputs. PLaD then leverages a ranking loss to re-calibrate the student's estimation of sequence likelihood, steering the student's focus toward understanding the relative quality of outputs rather than simply imitating the teacher. PLaD bypasses the need for access to the teacher LLM's internal states, tackles the student's expressivity limitations, and mitigates the student mis-calibration issue. Through extensive experiments on two sequence generation tasks and with various LLMs, we demonstrate the effectiveness of our proposed PLaD framework.
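To make the core idea concrete, the following is a minimal sketch of how a ranking loss over a pseudo-preference pair might look. The abstract does not specify the exact loss form, so this uses a standard pairwise logistic ranking loss over sequence log-likelihoods as an illustration; the function names and the logistic formulation are assumptions, not the paper's implementation.

```python
import math

def sequence_log_likelihood(token_logprobs):
    """Log-likelihood of a generated sequence under the student model,
    i.e. the sum of its per-token log-probabilities (an assumed helper)."""
    return sum(token_logprobs)

def preference_ranking_loss(logp_preferred, logp_dispreferred):
    """Illustrative pairwise logistic ranking loss.

    For a pseudo-preference pair (teacher output preferred over student
    output), the loss shrinks as the student assigns higher sequence
    likelihood to the preferred output relative to the dispreferred one,
    re-calibrating relative quality instead of imitating exact tokens.
    """
    gap = logp_preferred - logp_dispreferred
    return -math.log(1.0 / (1.0 + math.exp(-gap)))

# Example: the student currently likes its own (dispreferred) output more,
# so the loss is large and gradients would push the ranking the other way.
teacher_logp = sequence_log_likelihood([-0.4, -0.6, -0.5])   # -1.5
student_logp = sequence_log_likelihood([-0.2, -0.3, -0.4])   # -0.9
loss = preference_ranking_loss(teacher_logp, student_logp)
```

The key design point is that the loss depends only on output sequences and their likelihoods, which is consistent with the abstract's claim that PLaD needs no access to the teacher's internal states.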