Knowledge distillation (KD) is a promising approach for compressing large language models (LLMs) by transferring their knowledge to smaller models. During this process, white-box KD methods usually minimize the distance between the output distributions of the two models so that more knowledge can be transferred. However, in the current white-box KD framework, the output distributions are computed in the two models' respective output spaces, each using its own prediction head. We argue that this discrepancy between output spaces leads to low similarity between the teacher model and the student model at both the representation and distribution levels. Furthermore, it also hinders KD between models with different vocabularies, a common situation for current LLMs. To address these issues, we propose a dual-space knowledge distillation (DSKD) framework that unifies the output spaces of the two models for KD. On the basis of DSKD, we further develop a cross-model attention mechanism that can automatically align the representations of two models with different vocabularies. Thus, our framework is not only compatible with various distance functions for KD (e.g., KL divergence), like the current framework, but also supports KD between any two LLMs regardless of their vocabularies. Experiments on task-agnostic instruction-following benchmarks show that DSKD significantly outperforms the current white-box KD framework with various distance functions, and also surpasses existing KD methods for LLMs with different vocabularies.
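To make the unified-space idea concrete, the following is a minimal PyTorch sketch: the student's hidden states are projected into the teacher's representation space (a stand-in for the paper's learned alignment), and both distributions are then produced by the *same* teacher prediction head, so the KD loss is computed in one output space. All dimensions, tensors, and module names here are invented for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy dimensions (assumptions for illustration only)
seq_len, d_student, d_teacher, vocab = 4, 8, 16, 32

# Stand-ins for the two models' hidden states and the teacher's prediction head
student_hidden = torch.randn(seq_len, d_student)
teacher_hidden = torch.randn(seq_len, d_teacher)
teacher_head = nn.Linear(d_teacher, vocab, bias=False)

# Learned projection mapping student representations into the teacher's space;
# this is the step that unifies the two output spaces before computing the loss
proj = nn.Linear(d_student, d_teacher, bias=False)

# Both distributions now come from the SAME (teacher) head, i.e. one output space
student_logits = teacher_head(proj(student_hidden))
teacher_logits = teacher_head(teacher_hidden)

# Standard forward KL divergence between teacher and student distributions
kd_loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.log_softmax(teacher_logits, dim=-1),
    log_target=True,
    reduction="batchmean",
)
print(f"KD loss in the unified (teacher) space: {kd_loss.item():.4f}")
```

Because the loss no longer depends on the student's own head, the same construction applies even when the two models use different vocabularies, which is what the cross-model attention mechanism generalizes.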