Neural posterior estimation (NPE) and neural likelihood estimation (NLE) are machine learning approaches that provide accurate posterior and likelihood approximations in complex modeling scenarios, and in situations where amortized inference is a necessity. While such methods have shown significant promise across a range of diverse scientific applications, their statistical accuracy has so far been unexplored. In this manuscript, we give, for the first time, an in-depth exploration of the statistical behavior of NPE and NLE. We prove that these methods enjoy theoretical guarantees similar to those of common statistical methods such as approximate Bayesian computation (ABC) and Bayesian synthetic likelihood (BSL). While NPE and NLE are just as accurate as ABC and BSL, we prove that this accuracy can often be achieved at a vastly reduced computational cost, and that they therefore deliver more attractive approximations than ABC and BSL in certain problems. We verify our theoretical results in several examples from the literature.
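To make the NPE idea concrete, the following is a minimal sketch on a toy conjugate Gaussian model (the model, the linear-Gaussian conditional density estimator, and all variable names are illustrative assumptions, not the paper's method; a real NPE implementation would use a flexible conditional density estimator such as a normalizing flow). NPE simulates parameter-data pairs from the prior and the model, then fits a conditional density q(theta | x) to those pairs by maximum likelihood; here q is Gaussian with a mean linear in x, so the fit reduces to least squares, and the learned posterior can be checked against the known exact posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (an illustrative assumption):
#   theta ~ N(0, 1),  x | theta ~ N(theta, 1)
# The exact posterior is theta | x ~ N(x / 2, 1 / 2).
n_sim = 100_000
theta = rng.normal(0.0, 1.0, n_sim)  # draws from the prior
x = rng.normal(theta, 1.0)           # simulated data given each theta

# NPE step: fit a conditional density q(theta | x) to the simulated
# (theta, x) pairs.  With a Gaussian q whose mean is linear in x,
# maximum likelihood is ordinary least squares.
slope, intercept = np.polyfit(x, theta, 1)       # fitted E[theta | x]
resid_sd = np.std(theta - (intercept + slope * x))  # fitted posterior sd

# The amortized approximation q(theta | x) = N(intercept + slope * x,
# resid_sd**2) should recover the exact posterior N(x / 2, 1 / 2),
# i.e. slope ~ 0.5, intercept ~ 0, resid_sd ~ sqrt(0.5) ~ 0.707.
print(slope, intercept, resid_sd)
```

Because the estimator is trained on simulations alone, the fitted q can then be evaluated at any observed x without further simulation, which is the amortization property the abstract refers to.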