Small CNN-based models usually require transferring knowledge from a large model before deployment on computationally resource-limited edge devices. Masked image modeling (MIM) methods achieve great success in various visual tasks but remain largely unexplored in knowledge distillation for heterogeneous deep models, mainly because of the significant discrepancy between Transformer-based large models and CNN-based small networks. In this paper, we develop the first Heterogeneous Generative Knowledge Distillation (H-GKD) method based on MIM, which can efficiently transfer knowledge from large Transformer models to small CNN-based models in a generative self-supervised fashion. Our method bridges Transformer-based models and CNNs by training a UNet-style student with sparse convolution, which can effectively mimic the visual representation inferred by the teacher over masked modeling. Our method is a simple yet effective learning paradigm for learning the visual representation and data distribution of heterogeneous teacher models, which can be pre-trained using advanced generative methods. Extensive experiments show that it adapts well to various model architectures and sizes, consistently achieving state-of-the-art performance in image classification, object detection, and semantic segmentation tasks. For example, on the ImageNet-1K dataset, H-GKD improves the accuracy of ResNet-50 (sparse) from 76.98% to 80.01%.
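The core idea of mimicking a teacher's representation over masked modeling can be sketched as follows. This is a minimal, hedged illustration in plain Python, not the paper's implementation: patches are randomly masked, and a (stand-in) student prediction is scored against the teacher's patch features with an MSE computed only on the masked patches. The function names, patch counts, and feature dimensions here are illustrative assumptions.

```python
import random

random.seed(0)

def random_patch_mask(num_patches, mask_ratio):
    """Return a boolean list: True = patch is hidden from the student (MIM-style)."""
    num_masked = int(num_patches * mask_ratio)
    masked = set(random.sample(range(num_patches), num_masked))
    return [i in masked for i in range(num_patches)]

def masked_feature_mse(teacher_feats, student_feats, mask):
    """MSE between teacher and student patch features, averaged over masked
    patches only -- a simplified stand-in for a generative distillation loss."""
    total, count = 0.0, 0
    for t_row, s_row, masked in zip(teacher_feats, student_feats, mask):
        if not masked:
            continue
        for t, s in zip(t_row, s_row):
            total += (t - s) ** 2
            count += 1
    return total / count

# Toy example: 16 patches with 8-dim features. The "student" output is the
# teacher's features plus small noise, standing in for a sparse-conv student.
teacher = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
student = [[t + random.gauss(0, 0.1) for t in row] for row in teacher]
mask = random_patch_mask(16, mask_ratio=0.75)
loss = masked_feature_mse(teacher, student, mask)
```

In the actual method the student would consume only the visible patches (hence the sparse convolution) and be optimized to drive this masked-region discrepancy down; the sketch above only shows how the objective is scored.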