While score-based diffusion models have achieved exceptional sampling quality, their sampling speeds are often limited by the high computational burden of score function evaluations. Despite recent remarkable empirical advances in speeding up score-based samplers, theoretical understanding of acceleration techniques remains largely limited. To bridge this gap, we propose a novel training-free acceleration scheme for stochastic samplers. Under minimal assumptions -- namely, $L^2$-accurate score estimates and a finite second-moment condition on the target distribution -- our accelerated sampler provably achieves $\varepsilon$-accuracy in total variation within $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$ iterations, thereby significantly improving upon the $\widetilde{O}(d/\varepsilon)$ iteration complexity of standard score-based samplers. Notably, our convergence theory does not rely on restrictive assumptions on the target distribution or higher-order score estimation guarantees.
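For context on the baseline being accelerated, the following is a minimal sketch of a *standard* score-based stochastic sampler: an Euler–Maruyama discretization of the reverse-time SDE for an Ornstein–Uhlenbeck (VP-type) forward process. The `score` argument stands in for a learned $L^2$-accurate score estimate; this is an illustrative assumption, not the paper's accelerated scheme, whose update rule is not specified in the abstract.

```python
import numpy as np

def reverse_sde_sampler(score, d, num_steps=1000, T=1.0, rng=None):
    """Generic score-based stochastic sampler (illustrative baseline only).

    Forward process (OU / VP-type): dX = -X dt + sqrt(2) dW, run to time T.
    Reverse-time SDE, discretized backward with step h = T / num_steps:
        x <- x + h * [x + 2 * score(x, t)] + sqrt(2h) * z,   z ~ N(0, I).
    `score(x, t)` is a stand-in for a learned score estimate of log p_t.
    """
    rng = np.random.default_rng(rng)
    h = T / num_steps
    x = rng.standard_normal(d)            # initialize from the N(0, I) prior
    for k in range(num_steps):
        t = T - k * h                     # integrate backward from t = T to 0
        drift = x + 2.0 * score(x, t)     # reverse-SDE drift for the OU forward process
        x = x + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(d)
    return x

# Toy usage: for a standard Gaussian target, the OU marginal is N(0, I) at
# every t, so the exact score is simply -x.
sample = reverse_sde_sampler(lambda x, t: -x, d=4, num_steps=500, rng=0)
```

With step size $h$, this baseline needs on the order of $1/h$ score evaluations; the abstract's accelerated scheme reduces the corresponding iteration count from $\widetilde{O}(d/\varepsilon)$ to $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$.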