Data-Free Knowledge Distillation (DFKD) has shown great potential in creating a compact student model while alleviating the dependency on real training data by synthesizing surrogate data. However, prior works have seldom been examined under distribution shifts, leaving them potentially vulnerable in real-world applications. Recent Vision-Language Foundation Models, e.g., CLIP, have demonstrated remarkable zero-shot out-of-distribution generalization, yet they consume heavy computational resources. In this paper, we discuss extending DFKD to Vision-Language Foundation Models without access to billion-scale image-text datasets. The objective is to customize a student model for distribution-agnostic downstream tasks with given category concepts, inheriting the out-of-distribution generalization capability of the pre-trained foundation models. To avoid degrading this generalization, the primary challenge of the task lies in synthesizing diverse surrogate images driven by text prompts. Since text prompts encode not only category concepts but also style information, we propose three novel Prompt Diversification methods, namely Mix-Prompt, Random-Prompt, and Contrastive-Prompt, to encourage image synthesis with diverse styles. Experiments on out-of-distribution generalization datasets demonstrate the effectiveness of the proposed methods, with Contrastive-Prompt performing best.
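To make the idea of prompt diversification concrete, below is a minimal sketch assuming OpenAI's open-source CLIP package (https://github.com/openai/CLIP). The style vocabulary, the embedding interpolation standing in for Mix-Prompt, and the Gaussian perturbation standing in for Random-Prompt are illustrative assumptions rather than the paper's exact formulations; Contrastive-Prompt is omitted here.

```python
# Hypothetical sketch of prompt diversification for text-driven image
# synthesis. Style list, interpolation, and noise scale are illustrative
# assumptions, not the paper's exact Mix-/Random-Prompt definitions.
import random
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Assumed hand-written style descriptors; the category concepts are given.
styles = ["a photo", "a sketch", "a painting", "a cartoon", "an art piece"]

def mix_prompt_embedding(category, alpha=None):
    """Mix-Prompt-like sketch: interpolate the CLIP text embeddings of
    two randomly chosen style prompts for the same category."""
    s1, s2 = random.sample(styles, 2)
    tokens = clip.tokenize([f"{s1} of a {category}",
                            f"{s2} of a {category}"]).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens).float()
    alpha = random.random() if alpha is None else alpha
    return alpha * emb[0] + (1.0 - alpha) * emb[1]

def random_prompt_embedding(category, sigma=0.1):
    """Random-Prompt-like sketch: perturb a base prompt embedding with
    Gaussian noise to emulate styles absent from the vocabulary."""
    tokens = clip.tokenize([f"a photo of a {category}"]).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens).float()[0]
    return emb + sigma * torch.randn_like(emb)

# Each diversified embedding would then condition an image synthesizer
# scored against CLIP's image encoder, so the surrogate images used for
# distillation span many styles.
target = mix_prompt_embedding("dog")
```

In this sketch, diversity comes from sampling a fresh mixed or perturbed text embedding per synthesized batch, which is the property the Prompt Diversification methods aim to provide.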