We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data. In a nutshell, we invert Bayes' theorem and estimate the marginal likelihood based on approximate representations of the joint model. Under a perfect approximation, the marginal likelihood is constant across all parameter values by definition. However, errors in approximate inference lead to undesirable variance in the marginal likelihood estimates across different parameter values. We penalize violations of this symmetry with a \textit{self-consistency loss}, which significantly improves the quality of approximate inference in low-data regimes and can be used to augment the training of popular neural density estimators. We apply our method to a number of synthetic problems and realistic scientific models, observing notable advantages in the context of both neural posterior and neural likelihood approximation.
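For concreteness, the symmetry can be sketched as follows; the notation $\theta$ for parameters, $y$ for data, and $q_\phi$ for the approximate posterior is illustrative and not fixed by the text above. Inverting Bayes' theorem expresses the marginal likelihood through the joint model and the posterior,
\begin{equation*}
    p(y) \;=\; \frac{p(\theta)\,p(y \mid \theta)}{p(\theta \mid y)} \qquad \text{for every } \theta,
\end{equation*}
so replacing the exact posterior with an approximation $q_\phi(\theta \mid y)$ yields an estimate $\hat{p}_\phi(y;\theta) = p(\theta)\,p(y \mid \theta) \,/\, q_\phi(\theta \mid y)$ that is constant in $\theta$ only if the approximation is perfect. A self-consistency penalty of the kind described above can therefore target the spread of these estimates across parameter draws $\theta_1, \dots, \theta_K$, for instance via the sample variance
\begin{equation*}
    \mathcal{L}_{\mathrm{SC}}(\phi) \;=\; \operatorname{Var}_{k=1,\dots,K}\!\Big[\log p(\theta_k) + \log p(y \mid \theta_k) - \log q_\phi(\theta_k \mid y)\Big],
\end{equation*}
which vanishes exactly when the estimated marginal likelihood does not depend on $\theta$.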