Sequential neural posterior estimation (SNPE) techniques have recently been proposed for simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequential simulations using neural network-based conditional density estimators trained by minimizing a specific loss function. The SNPE method proposed by Lueckmann et al. (2017) uses a calibration kernel to boost the sample weights around the observed data, yielding a more concentrated loss function. However, calibration kernels may increase the variances of both the empirical loss and its gradient, making training inefficient. To improve the stability of SNPE, this paper proposes an adaptive calibration kernel together with several variance reduction techniques. Numerical experiments confirm that the proposed method greatly speeds up training and provides a better approximation of the posterior than the original SNPE method and several existing competitors. We also demonstrate the superiority of the proposed method on a high-dimensional model with a real-world dataset.