Diffusion models, which convert noise into new data instances by learning to reverse a Markov diffusion process, have become a cornerstone of contemporary generative modeling. While their practical power is now widely recognized, the theoretical underpinnings remain far from mature. In this work, we develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models in discrete time, assuming access to $\ell_2$-accurate estimates of the (Stein) score functions. For a popular deterministic sampler (based on the probability flow ODE), we establish a convergence rate proportional to $1/T$ (with $T$ the total number of steps), improving upon past results; for another mainstream stochastic sampler (i.e., a type of denoising diffusion probabilistic model, or DDPM), we derive a convergence rate proportional to $1/\sqrt{T}$, matching the state-of-the-art theory. Imposing only minimal assumptions on the target data distribution (e.g., no smoothness assumption is imposed), our results characterize how $\ell_2$ score estimation errors affect the quality of the data generation process. In contrast to prior works, our theory is developed via an elementary yet versatile non-asymptotic approach, without resorting to the toolbox of SDEs and ODEs. Further, we design two accelerated variants, improving the convergence rates to $1/T^2$ for the ODE-based sampler and $1/T$ for the DDPM-type sampler, which might be of independent theoretical and empirical interest.
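To make the two samplers in question concrete, the display below sketches the generic discrete-time updates they refer to, written under the standard DDPM parameterization with a noise schedule $\{\alpha_t\}_{1 \le t \le T}$ and a score estimate $s_t(\cdot)$ of $\nabla \log q_t$; this is a schematic form for illustration only, and the coefficients need not match the paper's exact discretization:
\[
\text{(probability flow ODE type)} \qquad x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\Big(x_t + \frac{1-\alpha_t}{2}\, s_t(x_t)\Big),
\]
\[
\text{(DDPM type)} \qquad x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\big(x_t + (1-\alpha_t)\, s_t(x_t)\big) + \sqrt{1-\alpha_t}\, z_t, \qquad z_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_d),
\]
both initialized at $x_T \sim \mathcal{N}(0, I_d)$. The deterministic update applies half the score correction and injects no noise, whereas the DDPM-type update applies the full score correction plus fresh Gaussian noise at each step; this structural difference is what separates the two samplers analyzed above.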