Approximate Bayesian computation (ABC) is the most popular approach to inferring parameters when the data model is specified in the form of a simulator. Standard Monte Carlo methods for inference cannot be applied directly to such a model, since the likelihood is not available to evaluate pointwise. The main idea of ABC is to perform inference on an alternative model with an approximate likelihood (the ABC likelihood), estimated at each iteration from points simulated from the data model. The central challenge of ABC is then to trade off the bias introduced by approximating the model against the variance introduced by estimating the ABC likelihood. Stabilising the variance of the ABC likelihood requires a computational cost that is exponential in the dimension of the data; thus the most common approach to reducing variance is to perform inference conditional on summary statistics. In this paper we introduce a new approach to estimating the ABC likelihood: using iterative ensemble Kalman inversion (IEnKI) (Iglesias, 2016; Iglesias et al., 2018). We first introduce new estimators of the marginal likelihood in the case of a Gaussian data model using the IEnKI output, then show how these may be used in ABC. Performance is illustrated on the Lotka-Volterra model, where we observe substantial improvements over standard ABC and other commonly used approaches.
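To make the ABC mechanism described above concrete, the following is a minimal sketch of basic ABC rejection sampling, the simplest instance of the idea: draw parameters from the prior, simulate data from the model, and accept draws whose summary statistic lies within a tolerance of the observed summary. The Gaussian toy simulator, the uniform prior, and the tolerance value are illustrative assumptions, not taken from the paper (which uses IEnKI-based estimators and the Lotka-Volterra model).

```python
import random
import statistics

random.seed(0)

def simulator(theta, n=50):
    # Toy data model: n Gaussian observations with unknown mean theta.
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(x):
    # Summary statistic: the sample mean (sufficient for this toy model;
    # in general, conditioning on summaries introduces the bias discussed above).
    return statistics.fmean(x)

# Hypothetical "true" parameter used only to generate the observed data.
theta_true = 2.0
s_obs = summary(simulator(theta_true))

def abc_rejection(n_draws=5000, eps=0.1):
    """ABC rejection: accept prior draws whose simulated summary
    falls within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5.0, 5.0)   # draw from a uniform prior
        s_sim = summary(simulator(theta))   # simulate data, compute its summary
        if abs(s_sim - s_obs) < eps:        # uniform ABC kernel, tolerance eps
            accepted.append(theta)
    return accepted

post = abc_rejection()
```

Shrinking `eps` reduces the bias of the ABC approximation but lowers the acceptance rate, which is exactly the bias-variance trade-off the abstract refers to; the paper's contribution is a different, IEnKI-based estimator of the ABC likelihood rather than this hard-threshold kernel.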