Markov chain Monte Carlo methods have become popular in statistics as versatile techniques for sampling from complicated probability distributions. In this work, we propose a method to parameterize and train the transition kernels of Markov chains to achieve efficient sampling and good mixing. The training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data. Our approach leverages involutive Metropolis-Hastings kernels constructed from reversible neural networks, which satisfy detailed balance by construction. We find that reversibility also implies $C_2$-equivariance of the discriminator function, which can be used to restrict its function space.
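To make the involutive Metropolis-Hastings construction concrete, here is a minimal sketch of one such kernel. It is *not* the paper's method: in place of a trained reversible neural network it uses a hand-picked, volume-preserving involution $(x, v) \mapsto (x + v, -v)$ on an augmented state, and a standard normal target stands in for the data distribution. All names (`log_p`, `involution`, `involutive_mh_step`) are illustrative. Because the map is an involution with unit Jacobian, accepting with the joint density ratio yields detailed balance.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Illustrative target: standard normal (stands in for the data distribution).
    return -0.5 * np.sum(x**2)

def involution(state):
    # Deterministic involution on the augmented state (x, v):
    # applying it twice returns the input, and its Jacobian determinant is 1.
    x, v = state
    return (x + v, -v)

def involutive_mh_step(x):
    # Refresh the auxiliary variable v ~ N(0, I), apply the involution,
    # and accept with the joint-density ratio (|det J| = 1 here).
    v = rng.standard_normal(x.shape)
    y, w = involution((x, v))
    log_alpha = (log_p(y) - 0.5 * np.sum(w**2)) - (log_p(x) - 0.5 * np.sum(v**2))
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x

# Run a short chain from the origin.
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = involutive_mh_step(x)
    samples.append(x.copy())
samples = np.asarray(samples)
```

For this particular involution the kernel reduces to random-walk Metropolis, which is exactly why the paper parameterizes the involution with a reversible network instead: the learned map can propose long, high-acceptance moves while the involutive structure keeps detailed balance for free.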