The Gibbs sampler (a.k.a. Glauber dynamics and heat-bath algorithm) is a popular Markov chain Monte Carlo algorithm which iteratively samples from the conditional distributions of a probability measure $π$ of interest. Under the assumption that $π$ is strongly log-concave, we show that the random scan Gibbs sampler contracts in relative entropy and provide a sharp characterization of the associated contraction rate. Assuming that evaluating conditionals is cheap compared to evaluating the joint density, our results imply that the number of full evaluations of $π$ needed for the Gibbs sampler to mix grows linearly with the condition number and is independent of the dimension. If $π$ is non-strongly log-concave, the convergence rate in entropy degrades from exponential to polynomial. Our techniques are versatile and extend to Metropolis-within-Gibbs schemes and the Hit-and-Run algorithm. A comparison with gradient-based schemes and the connection with the optimization literature are also discussed.
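To make the mechanism concrete, here is a minimal sketch of a random-scan Gibbs sampler for a toy target: a bivariate Gaussian with correlation `rho`, which is strongly log-concave. The function name `random_scan_gibbs` and the specific target are illustrative choices, not part of the paper; the paper's setting is a general strongly log-concave $π$ in $d$ dimensions.

```python
import numpy as np

def random_scan_gibbs(rho, n_steps, rng):
    """Random-scan Gibbs sampler for a standard bivariate Gaussian
    with correlation rho.  At each step a coordinate is chosen
    uniformly at random and resampled exactly from its conditional
    distribution given the other coordinate."""
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    cond_sd = np.sqrt(1.0 - rho ** 2)   # conditional std. dev. of either coordinate
    for t in range(n_steps):
        i = rng.integers(2)             # random scan: pick a coordinate uniformly
        j = 1 - i
        # Conditional law of the chosen coordinate: x_i | x_j ~ N(rho * x_j, 1 - rho^2)
        x[i] = rho * x[j] + cond_sd * rng.normal()
        samples[t] = x
    return samples

rng = np.random.default_rng(0)
s = random_scan_gibbs(rho=0.8, n_steps=50_000, rng=rng)
emp_corr = np.corrcoef(s[2_000:].T)[0, 1]  # empirical correlation after burn-in
```

Note that each update only requires the one-dimensional conditional, never the joint density, which is the cost model behind the "cheap conditionals" assumption in the abstract.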