Knowledge distillation methods have recently been shown to be a promising direction for speeding up the synthesis of large-scale diffusion models by requiring only a few inference steps. While several powerful distillation methods have recently been proposed, the overall quality of student samples is typically lower than that of teacher samples, which hinders their practical usage. In this work, we investigate the relative quality of samples produced by a teacher text-to-image diffusion model and its distilled student version. As our main empirical finding, we discover that a noticeable portion of student samples exhibit superior fidelity compared to the teacher ones, despite the "approximate" nature of the student. Based on this finding, we propose an adaptive collaboration between student and teacher diffusion models for effective text-to-image synthesis. Specifically, the distilled model produces an initial sample, and then an oracle decides whether it needs further improvement with the slow teacher model. Extensive experiments demonstrate that the designed pipeline surpasses state-of-the-art text-to-image alternatives in terms of human preference across various inference budgets. Furthermore, the proposed approach can be naturally used in popular applications such as text-guided image editing and controllable generation.
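A minimal sketch of the adaptive collaboration described above, under the following assumptions: `student`, `teacher_refine`, and `oracle_score` are hypothetical callables standing in for the distilled model, the teacher refinement procedure, and the oracle (none of these names come from the paper), and the oracle is modeled as a scalar fidelity estimator with a tunable acceptance threshold:

```python
from typing import Any, Callable

def adaptive_generate(
    prompt: str,
    student: Callable[[str], Any],              # fast distilled model (few inference steps)
    teacher_refine: Callable[[str, Any], Any],  # slow teacher, refines a given sample
    oracle_score: Callable[[str, Any], float],  # estimated sample fidelity, higher is better
    threshold: float = 0.5,                     # hypothetical acceptance cutoff
) -> Any:
    """Adaptive student-teacher collaboration: the cheap student proposes
    a sample; the expensive teacher is invoked only when the oracle judges
    that sample insufficient."""
    # Step 1: produce a cheap initial sample with the distilled student.
    sample = student(prompt)

    # Step 2: the oracle decides whether the student sample is already good enough.
    if oracle_score(prompt, sample) >= threshold:
        return sample  # accept the student sample; skip the expensive teacher

    # Step 3: otherwise, spend the remaining budget on teacher refinement.
    return teacher_refine(prompt, sample)
```

In this sketch, the threshold controls the overall inference budget: raising it routes more samples to the teacher, trading speed for quality, while lowering it keeps more of the fast student outputs.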