Text-to-image (T2I) generation with Stable Diffusion models (SDMs) involves high computational demands due to billion-scale parameters. To enhance efficiency, recent studies have reduced sampling steps and applied network quantization while retaining the original architectures. The scarcity of architectural reduction attempts may stem from concerns over the expensive retraining of such massive models. In this work, we uncover the surprising potential of block pruning and feature distillation for low-cost, general-purpose T2I. By removing several residual and attention blocks from the U-Net of SDMs, we achieve a 30%–50% reduction in model size, MACs, and latency. We show that distillation retraining is effective even under limited resources: using only 13 A100 days and a small dataset, our compact models can imitate the original SDMs (v1.4 and v2.1-base, trained with over 6,000 A100 days). Benefiting from the transferred knowledge, our BK-SDMs deliver competitive results on zero-shot MS-COCO against larger multi-billion-parameter models. We further demonstrate the applicability of our lightweight backbones in personalized generation and image-to-image translation. Deployed on edge devices, our models attain 4-second inference. Code and models can be found at: https://github.com/Nota-NetsPresso/BK-SDM
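The combination of block pruning and feature distillation described above can be sketched in miniature. The toy code below (a hypothetical, stdlib-only illustration; the actual method operates on PyTorch U-Net activations, and `distill_loss`, `mse`, and the weighting parameters are names introduced here, not from the paper) shows the general shape of a feature-level distillation objective, where the pruned student is trained to match the teacher's final output and its retained intermediate block features:

```python
# Toy sketch of feature-level knowledge distillation (hypothetical names,
# plain-Python lists standing in for feature maps). In BK-SDM, the student
# is the block-pruned U-Net and features come from its remaining blocks.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distill_loss(teacher_feats, student_feats, out_t, out_s,
                 lam_feat=1.0, lam_out=1.0):
    """Combined objective: match the teacher's final output plus its
    per-block intermediate features. Feature lists are aligned
    block-by-block over the blocks the student retains."""
    feat_term = sum(mse(t, s) for t, s in zip(teacher_feats, student_feats))
    out_term = mse(out_t, out_s)
    return lam_out * out_term + lam_feat * feat_term

# Toy example: three retained blocks with 4-dim features.
teacher = [[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5], [2.0, 0.0, 2.0, 0.0]]
student = [[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5], [2.0, 0.0, 2.0, 0.0]]
loss = distill_loss(teacher, student, [1.0, 1.0], [1.0, 1.0])
print(loss)  # 0.0 when the student reproduces the teacher exactly
```

In practice, such a loss would be minimized alongside the standard denoising objective during the short distillation retraining phase.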