Sequential Monte Carlo (SMC) methods are powerful tools for Bayesian inference, but they typically require many particles for accurate estimates, leading to high computational costs. We introduce persistent sampling (PS), an extension of SMC that mitigates this issue by allowing particles from previous iterations to persist. This produces a growing, weighted ensemble of particles distributed across iterations. In each iteration, PS uses multiple importance sampling and resampling from the mixture of all previous distributions to produce the next generation of particles. This addresses particle impoverishment and mode collapse, resulting in more accurate posterior approximations. Furthermore, this approach provides lower-variance marginal likelihood estimates for model comparison. Additionally, the persistent particles improve transition kernel adaptation for efficient exploration. Experiments on complex distributions show that PS consistently outperforms standard methods, achieving lower squared bias in posterior moment estimation and significantly reduced marginal likelihood errors, all at a lower computational cost. PS offers a robust, efficient, and scalable framework for Bayesian inference.
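The mechanism described above can be sketched in a toy example. The following is a minimal, hypothetical illustration, not the paper's algorithm: a 1D Gaussian prior and likelihood with a fixed tempering schedule, equal mixture proportions in the multiple-importance-sampling denominator, and a single random-walk Metropolis move after resampling. All numeric settings (`betas`, particle counts, step size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): prior N(0, 3^2), likelihood N(2, 1),
# tempered targets pi_t(x) proportional to prior(x) * likelihood(x)^beta_t.
def log_prior(x):
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0) - 0.5 * np.log(2 * np.pi)

def log_lik(x):
    return -0.5 * (x - 2.0) ** 2 - 0.5 * np.log(2 * np.pi)

def logsumexp(a, axis=0):
    m = np.max(a, axis=axis)
    return m + np.log(np.sum(np.exp(a - m), axis=axis))

betas = np.linspace(0.0, 1.0, 6)   # tempering schedule (assumed)
n = 400                            # new particles per iteration (assumed)
ensemble = [rng.normal(0.0, 3.0, n)]  # iteration 0: exact prior draws
log_Z = [0.0]                      # log normalizing constant per target

for t in range(1, len(betas)):
    # Persistent ensemble: particles from ALL previous iterations.
    x = np.concatenate(ensemble)
    # Log-density of each previous (normalized) tempered target at each
    # particle; the mixture of these is the importance proposal.
    log_q = np.array([log_prior(x) + betas[s] * log_lik(x) - log_Z[s]
                      for s in range(t)])
    log_mix = logsumexp(log_q, axis=0) - np.log(t)
    # Multiple-importance-sampling weights against the current target.
    log_w = log_prior(x) + betas[t] * log_lik(x) - log_mix
    # Marginal-likelihood estimate for target t from the full ensemble.
    log_Z.append(logsumexp(log_w, axis=0) - np.log(x.size))
    w = np.exp(log_w - logsumexp(log_w, axis=0))
    w /= w.sum()
    # Resample from the weighted persistent ensemble, then move with one
    # random-walk Metropolis step (a stand-in for the transition kernel).
    new = x[rng.choice(x.size, size=n, p=w)]
    prop = new + rng.normal(0.0, 0.5, n)
    log_target = lambda y: log_prior(y) + betas[t] * log_lik(y)
    accept = np.log(rng.uniform(size=n)) < log_target(prop) - log_target(new)
    new[accept] = prop[accept]
    ensemble.append(new)

# For this conjugate toy model the exact posterior is N(1.8, 0.9) and the
# exact log evidence is about -2.27, so both estimates can be checked.
```

Note how each iteration reweights the *entire* persistent ensemble rather than discarding old particles, which is what drives the lower-variance marginal-likelihood estimates in the abstract; a full implementation would also adapt the kernel and schedule from the persistent particles.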