Training diffusion models is a computationally intensive task. In this paper, we introduce SpeeD, a novel method for accelerating diffusion model training based on a closer look at time steps. Our key findings are: i) time steps can be empirically divided into acceleration, deceleration, and convergence areas based on the process increment; ii) these time steps are imbalanced, with many concentrated in the convergence area; iii) the concentrated steps provide limited benefit for diffusion training. To address this, we design an asymmetric sampling strategy that reduces the frequency of steps from the convergence area while increasing the sampling probability for steps from the other areas. Additionally, we propose a weighting strategy that emphasizes time steps with rapidly changing process increments. As a plug-and-play, architecture-agnostic approach, SpeeD consistently achieves a 3× training speed-up across various diffusion architectures, datasets, and tasks. Notably, owing to its simple design, our approach significantly reduces the cost of diffusion model training with minimal overhead. Our research enables more researchers to train diffusion models at a lower cost.
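The asymmetric sampling idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the boundary `tau` between the rapid-change and convergence areas, the boost factor `k`, and the function name are all hypothetical placeholders for whatever the method derives from the process increment.

```python
import numpy as np

def asymmetric_timestep_sampler(T, tau, k, size, rng=None):
    """Sample diffusion time steps non-uniformly (hypothetical sketch).

    Steps below the boundary `tau` (standing in for the rapid-change
    acceleration/deceleration areas) are drawn `k` times more often than
    steps at or above `tau` (standing in for the convergence area).
    `T`, `tau`, and `k` are illustrative parameters, not from the paper.
    """
    rng = np.random.default_rng(rng)
    # Piecewise-constant sampling weights over the T time steps.
    weights = np.where(np.arange(T) < tau, k, 1.0)
    probs = weights / weights.sum()
    return rng.choice(T, size=size, p=probs)

# Example: 1000 steps, boundary at 400, early steps sampled 5x more often.
steps = asymmetric_timestep_sampler(T=1000, tau=400, k=5.0, size=8, rng=0)
```

During training, such a sampler would replace the usual uniform draw of `t`; the companion weighting strategy would then rescale each step's loss term so the objective stays consistent with the modified sampling distribution.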