We present algorithms for diffusion model sampling that achieve $\delta$ error in $\mathrm{polylog}(1/\delta)$ steps, given access to score estimates that are $\widetilde O(\delta)$-accurate in $L^2$. This is an exponential improvement over all previous results. Specifically, under minimal data assumptions, the complexity is $\widetilde O(d\,\mathrm{polylog}(1/\delta))$, where $d$ is the dimension of the data; under a non-uniform $L$-Lipschitz condition, the complexity is $\widetilde O(\sqrt{dL}\,\mathrm{polylog}(1/\delta))$; and if the data distribution has intrinsic dimension $d_\star$, then the complexity reduces to $\widetilde O(d_\star\,\mathrm{polylog}(1/\delta))$. Our approach also yields the first $\mathrm{polylog}(1/\delta)$-complexity sampler for general log-concave distributions that uses only gradient evaluations.
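As background for the setting above (and not the accelerated algorithm of this paper), a minimal sketch of score-based diffusion sampling: discretize the reverse SDE of an Ornstein–Uhlenbeck forward process with Euler–Maruyama steps, using a score oracle $\nabla \log p_t$. The function names and the Gaussian test target are illustrative assumptions, not taken from the source.

```python
import numpy as np

def reverse_sde_sample(score, d, n_steps=500, T=5.0, rng=None):
    """Euler-Maruyama discretization of the reverse SDE for the OU
    forward process dX_t = -X_t dt + sqrt(2) dB_t, whose stationary
    law is N(0, I).  `score(x, t)` estimates the score grad log p_t(x).
    This is a plain-vanilla sampler, not the polylog(1/delta) scheme."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(d)            # initialize near N(0, I)
    dt = T / n_steps
    for i in range(n_steps):
        t = T - i * dt                    # integrate backwards from T to 0
        drift = x + 2.0 * score(x, t)     # reverse drift: -f(x) + g^2 * score
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(d)
    return x

def gaussian_score(x, t, s2=4.0):
    """Analytic score for a toy target N(0, s2 * I): under the OU
    forward process, p_t = N(0, (s2*e^{-2t} + 1 - e^{-2t}) I)."""
    var_t = s2 * np.exp(-2.0 * t) + 1.0 - np.exp(-2.0 * t)
    return -x / var_t

# Coordinates are i.i.d. draws from the isotropic target N(0, 4).
x = reverse_sde_sample(gaussian_score, d=2000, n_steps=500, T=5.0, rng=0)
```

In this classical Euler–Maruyama scheme, reaching $\delta$ error costs $\mathrm{poly}(1/\delta)$ steps; the abstract's point is that its algorithms cut this to $\mathrm{polylog}(1/\delta)$ given $\widetilde O(\delta)$-accurate scores.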