Loss-based updating, including generalized Bayes, Gibbs, and quasi-posteriors, replaces the likelihood with a user-chosen loss and produces a posterior-like distribution via an exponential tilt. We give a decision-theoretic characterization that separates \emph{belief posteriors} -- conditional beliefs justified by the foundations of Savage and Anscombe-Aumann under a joint probability model -- from \emph{decision posteriors} -- randomized decision rules justified by preferences over decision rules. We make explicit that a loss-based posterior coincides with ordinary Bayes if and only if the loss is, up to scale and a data-only term, the negative log-likelihood. We then show that the generalized marginal likelihood is not evidence for decision posteriors, and that Bayes factors are not well-defined without additional structure. In the decision-posterior regime, non-degenerate posteriors require nonlinear preferences over decision rules. Under sequential coherence and separability, these preferences admit an entropy-penalized variational representation that yields generalized Bayes as the optimal rule.
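The exponential tilt mentioned above can be sketched as follows; here $\pi$ is the prior, $\ell$ the user-chosen loss, and $\eta > 0$ a learning-rate parameter (the symbols are illustrative, not necessarily the paper's notation):

```latex
\[
  \pi_\ell(\theta \mid x) \;\propto\; \pi(\theta)\,
  \exp\bigl\{-\eta\,\ell(\theta, x)\bigr\}.
\]
```

Taking $\ell(\theta, x) = -\log p(x \mid \theta)$ with $\eta = 1$ recovers the ordinary Bayes posterior, consistent with the if-and-only-if claim in the abstract.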