Recent advances in foundation models have yielded impressive performance across a wide range of tasks. Meanwhile, for specific applications, practitioners have been developing specialized application models. To enjoy the benefits of both kinds of models, one natural path is to transfer the knowledge in foundation models into specialized application models, which are generally more efficient to serve. Techniques from knowledge distillation may be applied here, with the application model learning to mimic the foundation model. However, specialized application models and foundation models differ substantially: they have large gaps in capacity, employ distinct architectures, consume input features from different modalities, and are optimized on different data distributions. These differences in model characteristics pose significant challenges for distillation methods. In this work, we propose creating a teaching committee that comprises both foundation model teachers and complementary teachers. Complementary teachers possess model characteristics akin to the student's and aim to bridge the gap between the foundation model and the specialized application model for a smoother knowledge transfer. Further, to accommodate the dissimilarity among the teachers in the committee, we introduce DiverseDistill, which allows the student to understand the expertise of each teacher and extract task knowledge. Our evaluations demonstrate that adding complementary teachers enhances student performance. Moreover, DiverseDistill consistently outperforms baseline distillation methods regardless of the teacher choices, yielding significantly better student performance.
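The abstract does not spell out DiverseDistill's formulation, so the following is only a minimal PyTorch sketch of the committee idea it describes: a student distills from several heterogeneous teachers, with a learned per-example gate deciding how much to trust each teacher's soft labels. All identifiers here (`CommitteeDistillLoss`, `gate`, `temperature`) are hypothetical illustrations, not the paper's actual method or API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommitteeDistillLoss(nn.Module):
    """Hypothetical sketch: distill from a committee of teachers.

    Each teacher's soft predictions are weighted by a learned,
    per-example gating score, so the student can lean on whichever
    teacher (foundation or complementary) is most reliable for a
    given input. This is an assumption about the mechanism, not
    the paper's published formulation.
    """

    def __init__(self, num_teachers: int, student_dim: int, temperature: float = 2.0):
        super().__init__()
        self.temperature = temperature
        # Gate maps the student's representation to one weight per teacher.
        self.gate = nn.Linear(student_dim, num_teachers)

    def forward(self, student_logits, teacher_logits_list, student_repr):
        t = self.temperature
        # Per-example weights over teachers, shape (batch, num_teachers).
        weights = F.softmax(self.gate(student_repr), dim=-1)
        log_p_student = F.log_softmax(student_logits / t, dim=-1)
        loss = 0.0
        for k, teacher_logits in enumerate(teacher_logits_list):
            p_teacher = F.softmax(teacher_logits / t, dim=-1)
            # Per-example KL(teacher || student), then gate-weighted.
            kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(-1)
            loss = loss + (weights[:, k] * kl).mean()
        # Standard temperature rescaling of the distillation gradient.
        return loss * t * t

# Toy usage: one foundation teacher plus one complementary teacher.
batch, num_classes, student_dim = 4, 10, 32
student_logits = torch.randn(batch, num_classes)
teacher_logits = [torch.randn(batch, num_classes) for _ in range(2)]
student_repr = torch.randn(batch, student_dim)
criterion = CommitteeDistillLoss(num_teachers=2, student_dim=student_dim)
print(criterion(student_logits, teacher_logits, student_repr))
```

The gating step is one plausible way to realize "understanding the expertise of each teacher": because the weights are computed from the student's own representation, teachers whose characteristics match the student's (the complementary teachers) can dominate where the foundation teacher's signal transfers poorly.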