When prior information is lacking, the go-to strategy for probabilistic inference is to combine a "default prior" and the likelihood via Bayes's theorem. Objective Bayes, (generalized) fiducial inference, and related frameworks fall under this umbrella. This construction is natural, but the corresponding posterior distributions generally offer only limited, approximately valid uncertainty quantification. The present paper takes a reimagined approach that yields posterior distributions with stronger reliability properties. The proposed construction starts with an inferential model (IM), one that takes the mathematical form of a data-driven possibility measure and features exactly valid uncertainty quantification, and then returns a so-called inner probabilistic approximation thereof. This inner probabilistic approximation inherits many of the original IM's desirable properties, including credible sets with exact coverage and asymptotic efficiency. The approximation also agrees with the familiar Bayes/fiducial solution in applications where the model has a group transformation structure. A Monte Carlo method for evaluating the probabilistic approximation is presented, along with numerical illustrations.