In this paper, we address a key challenge of Markov chain Monte Carlo (MCMC) algorithms within the approximate Bayesian computation (ABC) framework: their inherently local exploration mechanism often leaves them trapped in local modes of the posterior. We propose a novel Global-Local ABC-MCMC algorithm that combines the "exploration" capability of global proposals with the "exploitation" finesse of local proposals. By integrating iterative importance resampling into the likelihood-free framework, we construct an effective global proposal distribution. We select the optimal mixture of global and local moves by sequentially optimizing a unit-cost version of the expected squared jumping distance. Furthermore, we propose two adaptive schemes: the first uses a normalizing flow-based distribution learning model to iteratively refine the proposal for importance sampling, and the second improves the efficiency of the local sampler through Langevin dynamics and common random numbers. We demonstrate numerically that our method improves sampling efficiency and achieves more reliable convergence for complex posteriors. A software package implementing this method is available at https://github.com/caofff/GL-ABC-MCMC.
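To make the global-local mixture concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one step of a mixture-kernel ABC-MCMC update: with probability `w` a global, independent proposal is drawn (e.g., from a distribution fitted by importance resampling), and otherwise a symmetric local random-walk step is taken; acceptance uses the standard ABC indicator kernel with a single simulated dataset. All names here (`simulate`, `global_sample`, `global_logpdf`, `local_scale`, `w`, `eps`) are hypothetical placeholders for the components described in the abstract.

```python
import numpy as np

def gl_abc_mcmc_step(theta, y_obs, simulate, prior_logpdf, eps, w,
                     global_sample, global_logpdf, local_scale, rng):
    """One global-local ABC-MCMC step (illustrative sketch, not the paper's code).

    With probability w, propose from a global independent distribution
    (e.g., fitted by iterative importance resampling); otherwise take a
    symmetric local random-walk step. The proposal is accepted only if
    the simulated data fall within distance eps of the observations
    (ABC indicator kernel), with the usual Metropolis-Hastings ratio.
    """
    if rng.random() < w:
        # Global move: independent proposal, so the ratio q(theta)/q(theta')
        # enters the acceptance probability.
        theta_prop = global_sample(rng)
        log_q_ratio = global_logpdf(theta) - global_logpdf(theta_prop)
    else:
        # Local move: symmetric random walk, proposal ratio cancels.
        theta_prop = theta + local_scale * rng.standard_normal(theta.shape)
        log_q_ratio = 0.0

    x = simulate(theta_prop, rng)            # likelihood-free: simulate data
    if np.linalg.norm(x - y_obs) > eps:      # ABC indicator kernel
        return theta                         # reject: simulation too far
    log_alpha = prior_logpdf(theta_prop) - prior_logpdf(theta) + log_q_ratio
    return theta_prop if np.log(rng.random()) < log_alpha else theta
```

Because the mixture weight `w` is fixed within each step, each component kernel is reversible on its own, so alternating them by coin flip leaves the ABC posterior invariant; the paper's contribution includes tuning `w` by optimizing a unit-cost expected squared jumping distance, which this sketch does not attempt.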