We consider sampling from a Gibbs distribution by evolving a finite number of particles with a particular score estimator rather than Brownian motion. To accelerate the particles, we consider a second-order score-based ODE, similar to Nesterov acceleration. In contrast to traditional kernel density score estimation, we use the recently proposed regularized Wasserstein proximal method, yielding the Accelerated Regularized Wasserstein Proximal method (ARWP). We provide a detailed analysis of continuous- and discrete-time non-asymptotic and asymptotic mixing rates for Gaussian initial and target distributions, using techniques from Euclidean acceleration and accelerated information gradients. Compared with the kinetic Langevin sampling algorithm, the proposed algorithm exhibits a higher contraction rate in the asymptotic time regime. Numerical experiments are conducted on a range of low-dimensional problems, including multi-modal Gaussian mixtures and ill-conditioned Rosenbrock distributions. ARWP exhibits structured and convergent particles, accelerated discrete-time mixing, and faster tail exploration than the non-accelerated regularized Wasserstein proximal method and kinetic Langevin methods. Additionally, ARWP particles exhibit better generalization properties for some non-log-concave Bayesian neural network tasks.
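To make the second-order dynamics concrete, the following is a minimal, hypothetical sketch of an accelerated score-based particle flow for a one-dimensional Gaussian target. It is not the paper's ARWP estimator: the current-density score is approximated by fitting a Gaussian to the particles (defensible only in the Gaussian-to-Gaussian setting the abstract analyzes), and the damping parameter `gamma`, step size, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch, NOT the paper's algorithm: damped second-order
# (Nesterov-like) score-based particle dynamics
#   x'' + gamma * x' = grad log pi(x) - grad log rho_t(x)
# for a 1-D Gaussian target pi = N(0, sigma2), with the density score
# grad log rho_t estimated by a Gaussian fit to the current particles.
rng = np.random.default_rng(0)
n, sigma2 = 500, 1.0            # number of particles, target variance
dt, gamma, steps = 0.01, 2.0, 2000

x = rng.normal(3.0, 0.5, n)     # particles initialized far from the target
v = np.zeros(n)                 # velocities (second-order / kinetic state)

for _ in range(steps):
    # Exact target score for N(0, sigma2): d/dx log pi(x) = -x / sigma2.
    target_score = -x / sigma2
    # Gaussian-fit estimate of the current particle-density score
    # (an assumption; ARWP instead uses a regularized Wasserstein
    # proximal score estimator).
    m, s2 = x.mean(), x.var() + 1e-8
    density_score = -(x - m) / s2
    # Semi-implicit Euler step of the damped second-order ODE.
    v += dt * (-gamma * v + target_score - density_score)
    x += dt * v

# Particle mean and variance should approach 0 and sigma2.
print(x.mean(), x.var())
```

The deterministic, score-driven update replaces the Brownian increment of Langevin-type samplers, and the velocity variable is what carries the Nesterov-style acceleration.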