Continuous normalizing flows (CNFs) learn the probability path between a reference distribution and a target distribution by modeling, with neural networks, the vector field that generates this path. Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM). In this paper, we repurpose this method for probabilistic inference by incorporating Markovian sampling methods into the evaluation of the FM objective, and using the learned CNF to improve Monte Carlo sampling. Specifically, we propose an adaptive Markov chain Monte Carlo (MCMC) algorithm, which combines a local Markov transition kernel with a non-local, flow-informed transition kernel defined using a CNF. This CNF is adapted on-the-fly using samples from the Markov chain, which are used to specify the probability path for the FM objective. Our method also includes an adaptive tempering mechanism that allows the discovery of multiple modes in the target distribution. Under mild assumptions, we establish convergence of our method to a local optimum of the FM objective. We then benchmark our approach on several synthetic and real-world examples, achieving similar performance to other state-of-the-art methods, but often at a significantly lower computational cost.
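To make the FM objective concrete, the following is a minimal NumPy sketch of a Monte Carlo estimate of the conditional flow matching loss along a linear (optimal-transport) probability path, with Markov-chain draws standing in for samples from the target. The linear vector field `v_theta` and all names here are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def fm_loss(theta, x1, rng):
    """Monte Carlo estimate of the conditional flow matching loss.

    x1    : samples approximating the target (in the paper's setting,
            these would come from the Markov chain).
    theta : parameters of a toy linear vector field
            v_theta(x, t) = theta[0] * x + theta[1] * t  (hypothetical model).
    """
    n, d = x1.shape
    x0 = rng.standard_normal((n, d))     # reference distribution N(0, I)
    t = rng.uniform(size=(n, 1))         # time t ~ U(0, 1)
    xt = (1.0 - t) * x0 + t * x1         # linear probability path
    u = x1 - x0                          # conditional target velocity
    v = theta[0] * xt + theta[1] * t     # toy parametric vector field
    return np.mean(np.sum((v - u) ** 2, axis=1))

rng = np.random.default_rng(0)
# stand-in "chain samples": a Gaussian shifted away from the reference
x1 = rng.standard_normal((256, 2)) + 3.0
loss = fm_loss(np.array([0.5, 0.1]), x1, rng)
```

In the adaptive scheme described above, minimizing this loss over `theta` (here a two-parameter toy model, in practice a neural network) would be interleaved with MCMC steps, so that the chain's samples continually refine the probability path used to train the flow.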