Hamiltonian Monte Carlo (HMC) algorithms are among the most widely used sampling methods in high-dimensional settings, yet their convergence properties are poorly understood in divergences that quantify relative density mismatch, such as the Kullback-Leibler (KL) and Rényi divergences. These divergences naturally govern acceptance probabilities and warm-start requirements for Metropolis-adjusted Markov chains. In this work, we develop a framework for upgrading Wasserstein convergence guarantees for unadjusted Hamiltonian Monte Carlo (uHMC) to guarantees in the tail-sensitive KL and Rényi divergences. Our approach is based on one-shot couplings, which we use to establish a regularization property of the uHMC transition kernel. This regularization allows Wasserstein-2 mixing-time and asymptotic bias bounds to be lifted to KL divergence, and analogous Orlicz-Wasserstein bounds to be lifted to Rényi divergence, paralleling earlier work of Bou-Rabee and Eberle (2023), which upgraded Wasserstein-1 bounds to total variation distance via kernel smoothing. As a consequence, our results provide quantitative control of relative density mismatch, clarify the role of discretization bias in strong divergences, and yield principled guarantees relevant both to unadjusted sampling and to generating warm starts for Metropolis-adjusted Markov chains.