Vanilla variational inference seeks an optimal approximation to the Bayesian posterior distribution, but even the exact Bayesian posterior is often not meaningful under model misspecification. We propose predictive variational inference (PVI): a general inference framework that seeks, and samples from, an optimal posterior density such that the resulting posterior predictive distribution is as close as possible to the true data-generating process, where this closeness is measured by multiple scoring rules. By optimizing this objective, PVI is in general not the same as, and does not even attempt to approximate, the Bayesian posterior, even asymptotically. Rather, we interpret it as an implicit hierarchical expansion. Further, the learned posterior uncertainty detects heterogeneity of parameters across the population, enabling automatic model diagnosis. The framework applies to both likelihood-exact and likelihood-free models. We demonstrate its application on real-data examples.
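To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of the predictive objective in a toy misspecified model: the model assumes y ~ N(theta, 1), a Gaussian variational posterior q(theta) = N(mu, sigma^2) is fit by maximizing the log score of the induced posterior predictive N(mu, sigma^2 + 1) on overdispersed data. The model family, scoring rule, and optimizer choices here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
# Misspecified setting: the model says y ~ N(theta, 1),
# but the data are overdispersed with standard deviation 2.
y = rng.normal(0.0, 2.0, size=500)

def neg_log_score(params):
    """Negative log score of the posterior predictive on the data."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Under q(theta) = N(mu, sigma^2) and y | theta ~ N(theta, 1),
    # the posterior predictive is N(mu, sigma^2 + 1).
    pred_sd = np.sqrt(sigma**2 + 1.0)
    return -norm.logpdf(y, loc=mu, scale=pred_sd).mean()

res = minimize(neg_log_score, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Unlike a Bayesian posterior, whose width shrinks as n grows, the learned sigma_hat stays inflated so that the predictive variance matches the data's: the extra posterior spread absorbs the unmodeled dispersion, illustrating both the implicit hierarchical expansion and how posterior uncertainty can flag model misfit.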