Expectation propagation (EP) is a family of algorithms for performing approximate inference in probabilistic models. The updates of EP involve the evaluation of moments -- expectations of certain functions -- which can be estimated from Monte Carlo (MC) samples. However, the updates are not robust to MC noise when performed naively, and various prior works have attempted to address this issue in different ways. In this work, we provide a novel perspective on the moment-matching updates of EP; namely, that they perform natural-gradient-based optimisation of a variational objective. We use this insight to motivate two new EP variants, with updates that are particularly well-suited to MC estimation. They remain stable and are most sample-efficient when estimated with just a single sample. These new variants combine the benefits of their predecessors and address key weaknesses. In particular, they are easier to tune, offer an improved speed-accuracy trade-off, and do not rely on the use of debiasing estimators. We demonstrate their efficacy on a variety of probabilistic inference tasks.