Models are often misspecified in practice, making model criticism a key part of Bayesian analysis. It is important to detect not only when a model is wrong, but which aspects are wrong, and to do so in a computationally convenient and statistically rigorous way. We introduce a novel method for model criticism based on the fact that if the parameters are drawn from the prior, and the dataset is generated according to the assumed likelihood, then a sample from the posterior will be distributed according to the prior. Thus, departures from the assumed likelihood or prior can be detected by testing whether a posterior sample could plausibly have been generated by the prior. Building upon this idea, we propose to reparametrize all random elements of the likelihood and prior in terms of independent uniform random variables, or u-values. This makes it possible to aggregate across arbitrary subsets of the u-values for data points and parameters to test for model departures using classical hypothesis tests for dependence or non-uniformity. We demonstrate empirically how this method of uniform parametrization checks (UPCs) facilitates model criticism in several examples, and we develop supporting theoretical results.
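The core self-consistency property behind the method can be illustrated with a short simulation. The following is a minimal sketch, not the authors' full UPC procedure: it assumes a hypothetical conjugate Normal-Normal model, draws a parameter from the prior and data from the assumed likelihood, takes one posterior draw per replication, maps it to a u-value via the prior CDF, and applies a classical Kolmogorov-Smirnov test for uniformity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed illustrative model (not from the paper):
# prior:      theta ~ N(mu0, tau0^2)
# likelihood: y_i | theta ~ N(theta, sigma^2), i = 1..n
mu0, tau0 = 0.0, 1.0
sigma, n = 1.0, 20

u_values = []
for _ in range(2000):
    theta = rng.normal(mu0, tau0)                 # parameter drawn from the prior
    y = rng.normal(theta, sigma, size=n)          # data from the assumed likelihood
    # Exact posterior for this conjugate model
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)
    theta_post = rng.normal(post_mean, np.sqrt(post_var))  # one posterior draw
    # u-value: prior CDF evaluated at the posterior draw; under a correctly
    # specified model, the posterior draw is marginally distributed as the prior,
    # so this value is Uniform(0, 1) across replications.
    u_values.append(stats.norm.cdf(theta_post, loc=mu0, scale=tau0))

# Classical test for non-uniformity of the u-values
print(stats.kstest(u_values, "uniform"))
```

If the likelihood or prior used for posterior inference were misspecified relative to the data-generating process, the u-values would depart from uniformity and the test would tend to reject.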