Recently, through a unified gradient flow perspective on Markov chain Monte Carlo (MCMC) and variational inference (VI), particle-based variational inference methods (ParVIs) have been proposed that aim to combine the best of both worlds. While typical ParVIs such as Stein Variational Gradient Descent (SVGD) approximate the gradient flow within a reproducing kernel Hilbert space (RKHS), many recent works replace the RKHS with more expressive function spaces, such as neural networks. Although successful, these methods are mainly designed for sampling from unconstrained domains. In this paper, we offer a general solution to constrained sampling by introducing a boundary condition for the gradient flow that confines the particles within the given domain. This allows us to propose a new functional-gradient ParVI method for constrained sampling, called constrained functional gradient flow (CFG), with provable continuous-time convergence in total variation (TV). We also present novel numerical strategies to handle the boundary integral term arising from the domain constraints. Our theory and experiments demonstrate the effectiveness of the proposed framework.
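To make the setting concrete, the following is a minimal sketch of a particle-based update with a domain constraint. It is not the paper's CFG algorithm: it runs a vanilla SVGD step and then applies a naive projection that keeps particles inside the unit ball, standing in for the boundary condition discussed above. All function names, the RBF bandwidth, and the projection heuristic are our own illustrative assumptions.

```python
import numpy as np

def svgd_direction(x, grad_logp, h=0.5):
    """Vanilla SVGD update direction with an RBF kernel.

    x: (n, d) particle positions; grad_logp maps (n, d) -> (n, d).
    phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) grad_logp(x_j) + grad_{x_j} k(x_j, x_i) ]
    """
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]            # (n, n, d): x_a - x_b
    sq = np.sum(diffs ** 2, axis=-1)                 # squared pairwise distances
    k = np.exp(-sq / (2 * h ** 2))                   # RBF kernel matrix (symmetric)
    grad_k = -diffs / h ** 2 * k[..., None]          # grad of k wrt its first argument
    g = grad_logp(x)
    # Attraction toward high density plus a repulsive spreading term.
    return (k.T @ g + grad_k.sum(axis=0)) / n

def project_ball(x, radius=1.0):
    """Naive constraint handling: radially project particles back into the ball."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return x * scale

# Toy target: a standard Gaussian restricted to the unit ball.
rng = np.random.default_rng(0)
x = rng.normal(scale=0.3, size=(50, 2))
grad_logp = lambda x: -x                             # score of N(0, I)
for _ in range(200):
    x = project_ball(x + 0.1 * svgd_direction(x, grad_logp))
```

After the loop, every particle satisfies the constraint by construction; the projection here is a crude stand-in for the boundary integral treatment the paper develops.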