Score-based diffusion models, which generate new data by learning to reverse a diffusion process that perturbs samples from the target distribution into noise, have achieved remarkable success across a wide range of generative tasks. Despite their superior empirical performance, existing theoretical guarantees are often constrained by stringent assumptions or suboptimal convergence rates. In this paper, we establish a fast convergence theory for a popular SDE-based sampler under minimal assumptions. Our analysis shows that, given $\ell_{2}$-accurate estimates of the score functions, the total variation distance between the target and generated distributions is upper bounded by $O(d/T)$ (ignoring logarithmic factors), where $d$ is the data dimensionality and $T$ is the number of steps. This result holds for any target distribution with a finite first-order moment. To the best of our knowledge, it improves upon existing convergence theory for both the SDE-based sampler and another ODE-based sampler, while imposing minimal assumptions on the target data distribution and the score estimates. The improvement is achieved through a novel set of analytical tools that provides a fine-grained characterization of how the error propagates at each step of the reverse process.
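To make the sampling mechanism concrete, the following minimal Python sketch illustrates one instance of the kind of SDE-based (DDPM-type) sampler discussed above. All specifics are illustrative assumptions rather than the paper's construction: the linear noise schedule, the dimension, the Gaussian target $N(\mu, I)$, and the use of that Gaussian's exact score in place of a learned $\ell_{2}$-accurate estimate.

```python
import numpy as np

# Hypothetical toy setup: run the discretized reverse SDE (the DDPM update
# rule) with an oracle score for a Gaussian target N(mu, I). A trained
# network s_theta(x, t) would replace the oracle in practice.

rng = np.random.default_rng(0)
d, T = 2, 1000                      # data dimension d, number of steps T
mu = np.array([3.0, -1.0])          # mean of the illustrative Gaussian target
betas = np.linspace(1e-4, 0.02, T)  # a commonly used linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def score(x, t):
    # Exact score of the forward marginal q_t = N(sqrt(alpha_bar_t) * mu, I),
    # which holds when the target itself is N(mu, I).
    return -(x - np.sqrt(alpha_bars[t]) * mu)

# Reverse process: start from pure noise and denoise step by step.
x = rng.standard_normal(d)
for t in reversed(range(T)):
    z = rng.standard_normal(d) if t > 0 else np.zeros(d)  # no noise at the last step
    # One step of the discretized reverse SDE:
    # x_{t-1} = (x_t + beta_t * score(x_t, t)) / sqrt(alpha_t) + sqrt(beta_t) * z
    x = (x + betas[t] * score(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z

print(x)  # a draw that should land near mu = [3, -1]
```

The oracle score keeps the example self-contained and runnable; the convergence theory summarized above concerns exactly this update rule when the oracle is replaced by a score estimate that is accurate in the $\ell_{2}$ sense.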