Large pretrained visual models exhibit remarkable generalization across diverse recognition tasks. Yet, real-world applications often demand compact models tailored to specific problems. Variants of knowledge distillation have been developed for this purpose, enabling task-specific compact models (the students) to learn from a generic large pretrained one (the teacher). In this paper, we show that the excellent robustness and versatility of recent pretrained models challenge common practices established in the literature, calling for a new set of optimal guidelines for task-specific distillation. To address the lack of samples in downstream tasks, we also show that a variant of Mixup based on Stable Diffusion complements standard data augmentation. This strategy eliminates the need for engineered text prompts and improves distillation of generic models into streamlined specialized networks.
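As a rough illustration of the kind of pipeline the abstract describes, the sketch below combines a standard distillation objective with mixup over diffusion-generated images. This is not the paper's implementation: the function names, the Beta mixing scheme, the temperature value, and the assumption that synthetic images were pre-generated offline with a diffusion model are all illustrative choices.

```python
# Minimal sketch (not the paper's method) of task-specific distillation
# with mixup between real task images and diffusion-generated images.
# Assumptions: `teacher` and `student` are torch.nn.Module classifiers;
# `synthetic_batch` holds images generated offline with a diffusion model.
import torch
import torch.nn.functional as F

def mixup_images(real_batch, synthetic_batch, alpha=0.4):
    """Convex combination of real and synthetic images (hypothetical helper)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * real_batch + (1.0 - lam) * synthetic_batch

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student predictions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

def train_step(student, teacher, real_batch, synthetic_batch, optimizer):
    mixed = mixup_images(real_batch, synthetic_batch)
    with torch.no_grad():
        teacher_logits = teacher(mixed)   # frozen large pretrained teacher
    student_logits = student(mixed)       # compact task-specific student
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```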