We study the scaling properties of latent diffusion models (LDMs) with an emphasis on their sampling efficiency. While improved network architectures and inference algorithms have been shown to effectively boost the sampling efficiency of diffusion models, the role of model size -- a critical determinant of sampling efficiency -- has not been thoroughly examined. Through empirical analysis of established text-to-image diffusion models, we conduct an in-depth investigation into how model size influences sampling efficiency across varying sampling steps. Our findings unveil a surprising trend: when operating under a given inference budget, smaller models frequently outperform their larger equivalents in generating high-quality results. Moreover, we extend our study to demonstrate the generalizability of these findings by applying various diffusion samplers, exploring diverse downstream tasks, evaluating post-distilled models, and comparing performance relative to training compute. These findings open up new pathways for the development of LDM scaling strategies that can be employed to enhance generative capabilities within limited inference budgets.