While score-based diffusion models have achieved exceptional sampling quality, their sampling speeds are often limited by the high computational burden of score function evaluations. Despite remarkable recent empirical advances in speeding up score-based samplers, theoretical understanding of these acceleration techniques remains scarce. To bridge this gap, we propose a novel training-free acceleration scheme for stochastic samplers. Under minimal assumptions -- namely, $L^2$-accurate score estimates and a finite second-moment condition on the target distribution -- our accelerated sampler provably achieves $\varepsilon$-accuracy in total variation within $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$ iterations, thereby significantly improving upon the $\widetilde{O}(d/\varepsilon)$ iteration complexity of standard score-based samplers. Notably, our convergence theory does not rely on restrictive assumptions on the target distribution or higher-order score estimation guarantees.
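To make the cost model concrete, here is a minimal sketch of a standard (unaccelerated) stochastic sampler: a plain Euler-Maruyama discretization of a reverse-time variance-preserving SDE. Each iteration performs exactly one score evaluation, which is the bottleneck the iteration-complexity bounds above count. This is an illustrative toy, not the paper's method: the target is a standard Gaussian (whose perturbed score is known in closed form and stands in for a learned, $L^2$-accurate estimate), and the unit noise schedule is an assumption made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 2, 1000  # dimension, number of iterations (= number of score evaluations)

def score(x, t):
    # Toy stand-in for a learned score estimate: for a standard-Gaussian
    # target, the VP forward process keeps the marginals N(0, I), so the
    # true score of every perturbed marginal is simply -x.
    return -x

# Euler-Maruyama step for the reverse-time VP-SDE with unit noise schedule:
#   dx = [x/2 + score(x, t)] dt + dW  (integrated backward in time)
dt = 1.0 / T
x = rng.standard_normal(d)  # initialize from the prior N(0, I)
for k in range(T):
    z = rng.standard_normal(d)
    x = x + (0.5 * x + score(x, None)) * dt + np.sqrt(dt) * z

print(x.shape)  # one approximate sample from the target, in R^d
```

Reaching total-variation error $\varepsilon$ with such a sampler takes $\widetilde{O}(d/\varepsilon)$ of these per-iteration score calls; the accelerated scheme described above reduces this to $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$ without any retraining of the score network.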