Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing-data recovery, owing to their flexibility and non-linear nature. An importance-weighted version of the Bayesian GPLVM has been proposed to obtain a tighter variational bound. However, that approach is largely limited to simple data structures, because constructing an effective proposal distribution becomes challenging in high-dimensional spaces or on complex data sets. In this work, we propose an Annealed Importance Sampling (AIS) approach to address these issues. By transforming the posterior into a sequence of intermediate distributions via annealing, we combine the strengths of Sequential Monte Carlo samplers and variational inference (VI) to explore a wider range of posterior distributions and gradually approach the target distribution. We further derive an efficient algorithm by reparameterizing all variables in the evidence lower bound (ELBO). Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods, achieving tighter variational bounds, higher log-likelihoods, and more robust convergence.