We consider adaptive increasingly rare Markov chain Monte Carlo (MCMC) algorithms, a class of adaptive MCMC methods in which the adaptation based on the ``past'' occurs less and less frequently over time. Under a contraction assumption with respect to a Wasserstein-like function, we derive upper bounds on the convergence rate of Monte Carlo sums with a renormalisation factor that is ``almost'' the one appearing in a law of the iterated logarithm. We demonstrate the applicability of our results in several settings, among them those of simultaneous geometric and uniform ergodicity. All proofs are carried out on an augmented state space, which includes the classical non-augmented setting as a special case. In contrast to other limit theory for adaptive MCMC, certain technical assumptions, such as diminishing adaptation, are not needed.
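To make the ``increasingly rare'' adaptation pattern concrete, the following is a minimal Python sketch, not the algorithm or assumptions of this paper: a random-walk Metropolis sampler whose proposal scale is adapted only at the ends of blocks of growing length, so that adaptation based on the past happens less and less frequently. The function name air_rwm, the block-length schedule of order k^{3/2}, and the Robbins-Monro-style update towards a 0.234 acceptance rate are purely illustrative choices.

```python
import numpy as np

def air_rwm(log_density, x0, n_samples, seed=None):
    """Illustrative adaptive increasingly rare random-walk Metropolis.

    The proposal scale is adapted only at the ends of blocks whose
    lengths grow with the block index, so adaptation events become
    less and less frequent as the chain runs.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    scale = 1.0                       # current proposal standard deviation
    samples = np.empty((n_samples, d))
    block_idx, block_len, block_pos, block_accepts = 1, 1, 0, 0
    for n in range(n_samples):
        prop = x + scale * rng.standard_normal(d)
        if np.log(rng.random()) < log_density(prop) - log_density(x):
            x = prop
            block_accepts += 1
        samples[n] = x
        block_pos += 1
        if block_pos == block_len:    # adapt only at block boundaries
            acc_rate = block_accepts / block_len
            # move the scale towards a target acceptance rate of 0.234
            scale *= np.exp((acc_rate - 0.234) / np.sqrt(block_idx))
            block_idx += 1
            block_len = int(np.ceil(block_idx ** 1.5))  # increasingly rare
            block_pos, block_accepts = 0, 0
    return samples

# usage: ergodic average for a standard normal target
chain = air_rwm(lambda x: -0.5 * np.sum(x**2), x0=0.0, n_samples=20000, seed=0)
print(chain.mean(axis=0))             # Monte Carlo average, close to 0
```

The only design point the sketch is meant to convey is the block structure: within a block the transition kernel is fixed, and the adaptation rule is invoked only at block boundaries whose spacing grows, which is what distinguishes increasingly rare adaptation from adapting at every step.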