Markov chain Monte Carlo (MCMC) algorithms are based on the construction of a Markov chain whose transition probabilities leave invariant a probability distribution of interest. In this work, we view these transition probabilities as functions of their invariant distributions, and we develop a notion of derivative of an MCMC kernel with respect to its invariant distribution. Around this concept we build a set of tools that we refer to as Markov chain Monte Carlo Calculus. This allows us to compare Markov chains with different invariant distributions within a suitable class via what we refer to as mean value inequalities. We explain how MCMC Calculus provides a natural framework for studying algorithms that use an approximation of an invariant distribution, and we illustrate this by using the tools developed to prove convergence of interacting and sequential MCMC algorithms. Finally, we discuss how similar ideas can be used in other frameworks.
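As a concrete illustration of the opening sentence, the sketch below builds the simplest such kernel: a random-walk Metropolis transition, whose acceptance step makes the target distribution invariant. This is a minimal, self-contained example and not the paper's method; the target (a standard normal via its unnormalised log-density) and the function name `metropolis_hastings` are illustrative choices.

```python
import math
import random


def metropolis_hastings(log_density, x0, n_steps, step_size=1.0, rng=None):
    """Random-walk Metropolis kernel.

    Each transition proposes a Gaussian perturbation of the current
    state and accepts it with probability min(1, pi(x') / pi(x)),
    which leaves the target distribution pi invariant.
    """
    rng = rng or random.Random()
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Log acceptance ratio; log_density may be unnormalised.
        log_alpha = log_density(proposal) - log_density(x)
        if math.log(rng.random()) < log_alpha:
            x = proposal
        samples.append(x)
    return samples


# Target: standard normal, specified only up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                              n_steps=20000, rng=random.Random(0))
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's empirical mean and variance should approach 0 and 1, the moments of the invariant (standard normal) distribution, as the number of steps grows.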