In this paper, we introduce a new approach for integrating score-based models with the Metropolis-Hastings algorithm. While traditional score-based diffusion models excel at accurately learning the score function from data, they lack an explicit energy function, making the Metropolis-Hastings adjustment step inaccessible. Consequently, the unadjusted Langevin algorithm is typically used for sampling with estimated score functions, while the Metropolis-adjusted Langevin algorithm and other Metropolis-Hastings methods, together with the wealth of algorithms built on acceptance functions, remain out of reach. We address this limitation by introducing a new loss function based on the \emph{detailed balance condition}, which allows the Metropolis-Hastings acceptance probabilities to be estimated from a learned score function. We demonstrate the effectiveness of the proposed method in various scenarios, including sampling from heavy-tailed distributions.
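For context, a minimal sketch of the standard relations the abstract refers to: the generic Metropolis-Hastings acceptance probability for a target $\pi$ and proposal $q$, and the detailed balance condition that the adjusted kernel satisfies. These are the textbook forms, not the paper's specific parameterization of the learned acceptance.

```latex
% Standard Metropolis-Hastings acceptance probability for target \pi and proposal q;
% in the score-based setting the density ratio \pi(x')/\pi(x) is not directly available,
% which is what motivates learning the acceptance from the detailed balance condition.
\begin{equation}
  \alpha(x, x') \;=\; \min\!\left\{ 1,\;
    \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)} \right\}
\end{equation}
% Detailed balance condition satisfied by the resulting Metropolis-Hastings transition kernel T:
\begin{equation}
  \pi(x)\, T(x' \mid x) \;=\; \pi(x')\, T(x \mid x')
\end{equation}
```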