Previous knowledge distillation (KD) methods mostly focus on compressing network architectures, which is insufficient for deployment because costs such as transmission bandwidth and imaging equipment depend on the image size. We therefore propose Pixel Distillation, which extends knowledge distillation to the input level while simultaneously breaking architecture constraints. This scheme enables flexible cost control for deployment, since the system can adjust both the network architecture and the image quality according to the overall resource budget. Specifically, we first propose an input spatial representation distillation (ISRD) mechanism that transfers spatial knowledge from large images to the student's input module, facilitating stable knowledge transfer between CNNs and ViTs. We then establish a Teacher-Assistant-Student (TAS) framework that disentangles pixel distillation into a model compression stage and an input compression stage, significantly reducing both the overall complexity of pixel distillation and the difficulty of distilling intermediate knowledge. Finally, we adapt pixel distillation to object detection via an aligned feature for preservation (AFP) strategy for TAS, which aligns the output dimensions of the detectors at each stage by manipulating the features and anchors of the assistant. Comprehensive experiments on image classification and object detection demonstrate the effectiveness of our method. Code is available at https://github.com/gyguo/PixelDistillation.
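To make the ISRD idea concrete, the following is a minimal sketch of a spatial distillation loss: the student's input-module feature, computed from a small image, is upsampled to the spatial size of the teacher's early feature map, computed from the large image, and the two are matched with a mean squared error. The function name, the nearest-neighbor projector, and the plain MSE objective are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def isrd_loss(teacher_feat: np.ndarray, student_feat: np.ndarray) -> float:
    """Sketch of an input spatial representation distillation loss.

    teacher_feat: (C, H, W) feature map from the teacher's input module
                  on the large image.
    student_feat: (C, h, w) feature map from the student's input module
                  on the small image, with H % h == 0 and W % w == 0.
    The projector here is a simple nearest-neighbor upsampling; a learned
    projection could be used instead (an assumption of this sketch).
    """
    C, H, W = teacher_feat.shape
    c, h, w = student_feat.shape
    assert c == C and H % h == 0 and W % w == 0, "shapes must be compatible"
    # Upsample the student feature to the teacher's spatial resolution.
    up = student_feat.repeat(H // h, axis=1).repeat(W // w, axis=2)
    # Penalize the spatial discrepancy between the two representations.
    return float(np.mean((teacher_feat - up) ** 2))
```

In practice this term would be added to the task loss (e.g. cross-entropy) so the student's input module learns to recover, from low-resolution input, the spatial structure the teacher extracts from high-resolution input.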