Causal language models have demonstrated remarkable capabilities, but their size poses significant challenges for deployment in resource-constrained environments. Knowledge distillation, a widely used technique for transferring knowledge from a large teacher model to a small student model, presents a promising approach for model compression. A key remaining challenge lies in the differences between teacher and student models, namely the substantial capacity gap, mode averaging, and mode collapse, all of which hinder distillation. To address these issues, we introduce $\textit{Temporally Adaptive Interpolated Distillation (TAID)}$, a novel knowledge distillation approach that dynamically interpolates student and teacher distributions through an adaptive intermediate distribution, gradually shifting from the student's initial distribution towards the teacher's distribution. We provide a theoretical analysis demonstrating TAID's ability to prevent mode collapse and empirically show its effectiveness in addressing the capacity gap while balancing mode averaging and mode collapse. Our comprehensive experiments demonstrate TAID's superior performance across various model sizes and architectures in both instruction tuning and pre-training scenarios. Furthermore, we showcase TAID's practical impact by developing two state-of-the-art compact foundation models: $\texttt{TAID-LLM-1.5B}$ for language tasks and $\texttt{TAID-VLM-2B}$ for vision-language tasks. These results demonstrate TAID's effectiveness in creating high-performing and efficient models, advancing the development of more accessible AI technologies.
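The core idea, an intermediate target distribution that moves from the student's distribution toward the teacher's as training progresses, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the intermediate distribution is formed by interpolating student and teacher logits with a scalar $t \in [0, 1]$, and replaces the paper's adaptive update of $t$ with a simple linear ramp. The function names (`taid_target`, `kl`) are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def taid_target(student_logits, teacher_logits, t):
    # Hypothetical sketch of the intermediate distribution:
    # interpolate in logit space, then normalize.
    # t = 0 recovers the student's own distribution (an easy target);
    # t = 1 recovers the teacher's distribution (the final target).
    return softmax((1.0 - t) * student_logits + t * teacher_logits)

def kl(p, q, eps=1e-12):
    # Forward KL divergence KL(p || q) between two categorical
    # distributions over the vocabulary.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    student_logits = rng.normal(size=8)
    teacher_logits = rng.normal(size=8)
    # Linear ramp standing in for the paper's adaptive schedule:
    # early in training the target is close to the student, so the
    # distillation loss starts small and the target hardens over time.
    for step, t in enumerate(np.linspace(0.0, 1.0, 5)):
        target = taid_target(student_logits, teacher_logits, t)
        loss = kl(target, softmax(student_logits))
        print(f"step {step}: t={t:.2f}, KL(target || student)={loss:.4f}")
```

At $t = 0$ the target coincides with the student and the loss is zero; as $t$ grows the target drifts toward the teacher, so the gap the student must close at any single step stays small even when the overall capacity gap is large.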