In the era of Large Language Models (LLMs), Knowledge Distillation (KD) emerges as a pivotal methodology for transferring advanced capabilities from leading proprietary LLMs, such as GPT-4, to their open-source counterparts like LLaMA and Mistral. Additionally, as open-source LLMs flourish, KD plays a crucial role both in compressing these models and in facilitating their self-improvement by employing themselves as teachers. This paper presents a comprehensive survey of KD's role within the realm of LLMs, highlighting its critical function in imparting advanced knowledge to smaller models and its utility in model compression and self-improvement. Our survey is meticulously structured around three foundational pillars: \textit{algorithm}, \textit{skill}, and \textit{verticalization} -- providing a comprehensive examination of KD mechanisms, the enhancement of specific cognitive abilities, and their practical implications across diverse fields. Crucially, the survey navigates the intricate interplay between data augmentation (DA) and KD, illustrating how DA emerges as a powerful paradigm within the KD framework to bolster LLMs' performance. By leveraging DA to generate context-rich, skill-specific training data, KD transcends traditional boundaries, enabling open-source models to approximate the contextual adeptness, ethical alignment, and deep semantic insights characteristic of their proprietary counterparts. This work aims to provide an insightful guide for researchers and practitioners, offering a detailed overview of current methodologies in KD and proposing future research directions. Importantly, we firmly advocate for compliance with the legal terms that regulate the use of LLMs, ensuring the ethical and lawful application of KD of LLMs. An associated GitHub repository is available at https://github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs.