Vanilla variational inference finds an optimal approximation to the Bayesian posterior distribution, but even the exact Bayesian posterior is often not meaningful under model misspecification. We propose predictive variational inference (PVI): a general inference framework that seeks and samples from an optimal posterior density such that the resulting posterior predictive distribution is as close as possible to the true data generating process, where this closeness is measured by multiple scoring rules. By optimizing this objective, predictive variational inference is in general not the same as, and does not even attempt to approximate, the Bayesian posterior, even asymptotically. Rather, we interpret it as an implicit hierarchical expansion. Further, the learned posterior uncertainty detects heterogeneity of parameters across the population, enabling automatic model diagnosis. The framework applies to both likelihood-exact and likelihood-free models. We demonstrate its application in real data examples.
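To make the objective concrete, here is a minimal illustrative sketch, not the paper's algorithm: a hypothetical Gaussian toy model with a variational posterior q = N(m, s²), where the variational mean m is tuned (by crude grid search, purely for illustration) to minimize a Monte Carlo estimate of the energy score between posterior predictive samples and the observed data. The model, the fixed scale s, and the grid are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data-generating process (not from the paper): y ~ N(2, 1)
data = rng.normal(2.0, 1.0, size=200)

def energy_score(pred, obs):
    """Monte Carlo energy score (a proper scoring rule):
    E|Y - y| - 0.5 * E|Y - Y'|, averaged over observations."""
    term1 = np.mean(np.abs(pred[:, None] - obs[None, :]))
    term2 = 0.5 * np.mean(np.abs(pred[:, None] - pred[None, :]))
    return term1 - term2

def predictive_samples(m, s, n=500):
    """Draw from the posterior predictive implied by q = N(m, s^2)."""
    theta = rng.normal(m, s, size=n)   # parameters from the variational posterior
    return rng.normal(theta, 1.0)      # push each draw through the likelihood

# Crude grid search over the variational mean m (s held fixed for illustration);
# the paper would instead use gradient-based optimization of the objective.
grid = np.linspace(0.0, 4.0, 41)
scores = [energy_score(predictive_samples(m, 0.3), data) for m in grid]
best_m = grid[int(np.argmin(scores))]
```

Because the energy score is a proper scoring rule, `best_m` lands near the center of the observed data, so the posterior predictive N(best_m, 1 + s²) tracks the true data generating process rather than any exact Bayesian posterior.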