Training diffusion models is a computation-intensive task. In this paper, we introduce SpeeD, a novel method for accelerating diffusion model training based on a closer look at time steps. Our key findings are: i) time steps can be empirically divided into acceleration, deceleration, and convergence areas according to the process increment; ii) these time steps are imbalanced, with many concentrated in the convergence area; iii) the concentrated steps provide limited benefit for diffusion training. To address this, we design an asymmetric sampling strategy that reduces the sampling frequency of steps in the convergence area while increasing the sampling probability of steps in the other areas. Additionally, we propose a weighting strategy that emphasizes time steps with rapidly changing process increments. As a plug-and-play, architecture-agnostic approach, SpeeD consistently achieves a 3× acceleration across various diffusion architectures, datasets, and tasks. Notably, owing to its simple design, our approach significantly reduces the cost of diffusion model training with minimal overhead. Our work enables more researchers to train diffusion models at a lower cost.
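To illustrate the idea of asymmetric sampling, the following is a minimal sketch, not the paper's implementation: time steps in an assumed convergence area (here, `t >= tau`) are down-weighted by a suppression factor `k < 1`, so they are drawn less often during training. The boundary `tau` and factor `k` are illustrative hyper-parameters, not values from the paper.

```python
import random

def asymmetric_timestep_sampler(T=1000, tau=700, k=0.5, size=4, seed=0):
    """Hedged sketch of an asymmetric time-step sampler.

    Steps t < tau (assumed acceleration/deceleration areas) keep
    weight 1.0; steps t >= tau (assumed convergence area) are
    suppressed by a factor k < 1, reducing their sampling frequency.
    """
    rng = random.Random(seed)
    # Per-step sampling weights: uniform below tau, suppressed above.
    weights = [1.0 if t < tau else k for t in range(T)]
    # Draw `size` time steps with probability proportional to weight.
    return rng.choices(range(T), weights=weights, k=size)

# Example: draw a batch of time steps for one training iteration.
batch_t = asymmetric_timestep_sampler(size=8)
```

In practice, the same weights (or a separate schedule emphasizing rapidly changing process increments) could also be reused to re-weight the per-step training loss, as the abstract's weighting strategy suggests.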