When prior information is lacking, the go-to strategy for probabilistic inference is to combine a "default prior" and the likelihood via Bayes's theorem. Objective Bayes, (generalized) fiducial inference, etc. fall under this umbrella. This construction is natural, but the corresponding posterior distributions generally offer only limited, approximately valid uncertainty quantification. The present paper takes a reimagined approach that yields posterior distributions with stronger reliability properties. The proposed construction starts with an inferential model (IM), one that takes the mathematical form of a data-driven possibility measure and features exactly valid uncertainty quantification, and then returns a so-called inner probabilistic approximation thereof. This inner probabilistic approximation inherits many of the original IM's desirable properties, including credible sets with exact coverage and asymptotic efficiency. The approximation also agrees with the familiar Bayes/fiducial solution in applications where the model has a group invariance structure. A Monte Carlo method for evaluating the probabilistic approximation is presented, along with numerical illustrations.
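The abstract only sketches the construction, so the following is a minimal toy illustration rather than the paper's algorithm: in the Gaussian-mean model with known variance, a probability measure that assigns mass exactly 1 - alpha to every alpha-cut of an IM-style possibility contour can be sampled by drawing a uniform level and then a point on that contour level set; in this group-invariant example the draws coincide with the fiducial/objective-Bayes posterior N(xbar, sigma^2/n). The contour formula and the sampler sample_inner_approx below are illustrative assumptions, not definitions taken from the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy setting (assumed for illustration): X_1, ..., X_n iid N(theta, sigma^2), sigma known.
n, sigma, theta_true = 10, 1.0, 2.0
x = rng.normal(theta_true, sigma, size=n)
xbar, se = x.mean(), sigma / np.sqrt(n)

def contour(theta):
    # Hypothetical possibility contour for theta; its alpha-cuts are the
    # usual 100(1 - alpha)% z-intervals xbar +/- se * z_{1 - alpha/2}.
    return 1.0 - np.abs(2.0 * norm.cdf((theta - xbar) / se) - 1.0)

def sample_inner_approx(size):
    # Draw alpha ~ Unif(0, 1), then a point on the level set
    # {theta : contour(theta) = alpha}; the resulting probability measure
    # puts mass exactly 1 - alpha on every alpha-cut of the contour.
    alpha = rng.uniform(size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return xbar + sign * se * norm.ppf(1.0 - alpha / 2.0)

draws = sample_inner_approx(100_000)
print(draws.mean(), draws.std())        # ~ xbar and ~ sigma/sqrt(n), i.e., N(xbar, sigma^2/n)
print(np.mean(contour(draws) >= 0.05))  # ~ 0.95: the 0.05-cut (95% credible set) carries 95% mass

Because the contour evaluated at these draws is uniformly distributed, each alpha-cut receives mass exactly 1 - alpha, which is the exact-coverage property of the credible sets highlighted in the abstract; the agreement with the fiducial posterior here reflects the location-invariance of this toy model.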