Deep generative models complement Markov chain Monte Carlo (MCMC) methods for efficient sampling from high-dimensional distributions. Among these approaches, explicit generators such as Normalising Flows (NFs), combined with the Metropolis-Hastings algorithm, have been applied extensively to obtain unbiased samples from target distributions. We systematically study central problems of conditional NFs, namely high variance, mode collapse, and data efficiency, and propose adversarial training for NFs to ameliorate these problems. Experiments are conducted with low-dimensional synthetic datasets and XY spin models in two spatial dimensions.
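The unbiasedness mentioned above comes from using the explicit generator as the proposal in an independence Metropolis-Hastings step: because an NF provides a tractable proposal density q, the acceptance ratio p(x')q(x) / (p(x)q(x')) can be evaluated exactly, and the resulting chain targets p regardless of how imperfect q is. A minimal sketch, with a fixed Gaussian standing in for a trained flow and a hypothetical 1-D two-mode target (both are illustrative assumptions, not the paper's models):

```python
import numpy as np

# Hypothetical 1-D target: unnormalised mixture of Gaussians at +/-2.
def log_p(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# Stand-in for a trained normalising flow: any proposal with a
# tractable density works; here a broad N(0, 3^2).
def sample_q(rng, n):
    return rng.normal(0.0, 3.0, size=n)

def log_q(x):
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

def independence_mh(rng, n_steps):
    """Independence Metropolis-Hastings with proposal q and target p."""
    x = sample_q(rng, 1)[0]
    samples = np.empty(n_steps)
    proposals = sample_q(rng, n_steps)
    for i, xp in enumerate(proposals):
        # log of the acceptance ratio  p(x') q(x) / (p(x) q(x'))
        log_a = (log_p(xp) + log_q(x)) - (log_p(x) + log_q(xp))
        if np.log(rng.uniform()) < log_a:
            x = xp
        samples[i] = x
    return samples

rng = np.random.default_rng(0)
s = independence_mh(rng, n_steps=20000)
print("sample mean:", s.mean())  # symmetric target, so mean is near 0
```

The acceptance rate of such a chain directly reflects proposal quality, which is why mode collapse in the flow (missing one of the two modes here) degrades sampling efficiency even though the chain remains asymptotically unbiased.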