Causal language models have demonstrated remarkable capabilities, but their size poses significant challenges for deployment in resource-constrained environments. Knowledge distillation, a widely used technique for transferring knowledge from a large teacher model to a smaller student model, presents a promising approach for model compression. A significant remaining issue lies in the major differences between teacher and student models: the substantial capacity gap, mode averaging, and mode collapse, all of which pose barriers during distillation. To address these issues, we introduce $\textit{Temporally Adaptive Interpolated Distillation (TAID)}$, a novel knowledge distillation approach that dynamically interpolates between the student and teacher distributions through an adaptive intermediate distribution, gradually shifting from the student's initial distribution toward the teacher's distribution. We provide a theoretical analysis demonstrating TAID's ability to prevent mode collapse and empirically show its effectiveness in addressing the capacity gap while balancing mode averaging and mode collapse. Our comprehensive experiments demonstrate TAID's superior performance across various model sizes and architectures in both instruction tuning and pre-training scenarios. Furthermore, we showcase TAID's practical impact by developing two state-of-the-art compact foundation models: $\texttt{TAID-LLM-1.5B}$ for language tasks and $\texttt{TAID-VLM-2B}$ for vision-language tasks. These results demonstrate TAID's effectiveness in creating high-performing and efficient models, advancing the development of more accessible AI technologies.
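To make the interpolation concrete, below is a minimal PyTorch sketch of one plausible form of such an objective: the student is trained toward an intermediate target that mixes its own (detached) distribution with the teacher's, where the mixing coefficient $t$ increases over training. The function name `taid_loss`, the stop-gradient placement, and the linear ramp for $t$ are illustrative assumptions, not the paper's exact formulation (which adapts $t$ to training progress).

```python
import torch
import torch.nn.functional as F

def taid_loss(student_logits: torch.Tensor,
              teacher_logits: torch.Tensor,
              t: float) -> torch.Tensor:
    """Sketch of a TAID-style objective (names and details are assumptions).

    The target is an intermediate distribution that linearly interpolates
    the (detached) student distribution with the teacher distribution; as
    t rises from ~0 toward 1 over training, the target drifts from the
    student's own distribution toward the teacher's.
    """
    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_student = log_p_student.exp().detach()       # stop-gradient: target side only
    p_teacher = F.softmax(teacher_logits, dim=-1)
    p_mid = (1.0 - t) * p_student + t * p_teacher  # adaptive intermediate target
    # KL(p_mid || p_student); gradients flow only through log_p_student
    return F.kl_div(log_p_student, p_mid, reduction="batchmean")

# Toy usage with a simple linear schedule for t (a stand-in for the
# adaptive schedule described in the paper).
if __name__ == "__main__":
    vocab, batch = 32, 4
    student_logits = torch.randn(batch, vocab, requires_grad=True)
    teacher_logits = torch.randn(batch, vocab)
    total_steps = 100
    for step in (0, 50, 99):
        t = min(1.0, step / total_steps)
        loss = taid_loss(student_logits, teacher_logits, t)
        print(f"step={step:3d}  t={t:.2f}  loss={loss.item():.4f}")
```

Note that at small $t$ the target nearly coincides with the student's current distribution, so the loss stays well-scaled early in training even when the teacher is far more capable; this is one way to read the abstract's claim that the intermediate distribution eases the capacity gap.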