This paper concerns the mathematical analysis of diffusion models in machine learning. The drift term of the backward sampling process is represented as a conditional expectation involving the data distribution and the forward diffusion. The training process aims to find such a drift function by minimizing the mean-squared residual related to the conditional expectation. Using small-time approximations of the Green's function of the forward diffusion, we show that the analytical mean drift function in denoising diffusion probabilistic models (DDPM) and the score function in score-based generative models (SGM) blow up asymptotically in the final stages of the sampling process for singular data distributions, such as those concentrated on lower-dimensional manifolds, and are therefore difficult to approximate with a neural network. To overcome this difficulty, we derive a new target function and an associated loss, both of which remain bounded even for singular data distributions. We validate the theoretical findings with several numerical examples.
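To make the blow-up mechanism concrete, the following is a minimal sketch for the standard variance-preserving (Ornstein–Uhlenbeck) forward process; the Gaussian transition kernel and the point-mass data distribution below are illustrative assumptions, not necessarily the paper's exact setting:
\[
dX_t = -\tfrac{1}{2} X_t\, dt + dW_t,
\qquad
p_t(x \mid x_0) = \mathcal{N}\!\bigl(x;\ e^{-t/2} x_0,\ (1 - e^{-t}) I\bigr),
\]
so that, by Tweedie's formula, the score admits the conditional-expectation form
\[
\nabla_x \log p_t(x)
= \frac{e^{-t/2}\, \mathbb{E}[X_0 \mid X_t = x] - x}{1 - e^{-t}}.
\]
For a point mass at $x_0$, a typical forward sample satisfies $\lvert x - e^{-t/2} x_0 \rvert = O(\sqrt{t})$ while the denominator satisfies $1 - e^{-t} = O(t)$, so the score grows like $t^{-1/2}$ as $t \to 0$, i.e., in the final stages of the backward sampling process, consistent with the blow-up described above.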