We theoretically analyze the original version of the denoising diffusion probabilistic models (DDPMs) presented in Ho, J., Jain, A., and Abbeel, P., Advances in Neural Information Processing Systems, 33 (2020), pp. 6840-6851. Our main theorem states that the sequence constructed by the original DDPM sampling algorithm converges weakly to a given data distribution as the number of time steps tends to infinity, under asymptotic conditions, with respect to the number of time steps, on the variance-schedule parameters, the $L^2$-based score estimation error, and the noise-estimating function. In the proof, we show that the sampling sequence can be viewed as an exponential-integrator-type approximation of a reverse-time stochastic differential equation over a finite time interval.
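As a sketch of the connection stated above, and in standard DDPM notation that is assumed here rather than taken from the abstract itself: the forward noising process corresponds to a linear (Ornstein-Uhlenbeck-type) SDE, its time reversal involves the score of the marginal densities, and one step of the DDPM sampler can be read as an exponential-integrator discretization that treats the linear drift exactly.

```latex
% Forward noising SDE with variance schedule \beta(t):
%   dX_t = -\tfrac{1}{2}\beta(t)\,X_t\,dt + \sqrt{\beta(t)}\,dW_t .
%
% Reverse-time SDE on [0,T] (Y_t \sim X_{T-t} in law), driven by the
% score \nabla \log p_s of the forward marginals:
%   dY_t = \Bigl[\tfrac{1}{2}\beta(T-t)\,Y_t
%          + \beta(T-t)\,\nabla \log p_{T-t}(Y_t)\Bigr]dt
%          + \sqrt{\beta(T-t)}\,dB_t .
%
% Discrete DDPM sampling step (Ho et al.'s notation,
% \alpha_t = 1-\beta_t, \bar\alpha_t = \prod_{s\le t}\alpha_s,
% \epsilon_\theta the learned noise estimator, z \sim \mathcal{N}(0,I)):
%   x_{t-1} = \frac{1}{\sqrt{\alpha_t}}
%             \Bigl(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}
%                   \,\epsilon_\theta(x_t,t)\Bigr) + \sigma_t z .
%
% The factor 1/\sqrt{\alpha_t} = e^{\frac{1}{2}\int \beta} over one step
% integrates the linear drift exactly, which is the hallmark of an
% exponential-integrator scheme; the \epsilon_\theta term plays the role
% of the (frozen-in-time) score contribution.
```

This is an illustrative reconstruction of the standard relationships, not the paper's precise statement; the abstract's convergence conditions on the variance schedule, score error, and noise estimator are not reproduced here.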