We develop a unified framework for Bayesian hypothesis testing through the theory of moderate deviations, providing explicit asymptotic expansions for Bayes risk and optimal test statistics. Our analysis reveals that Bayesian test cutoffs operate on the moderate deviation scale $\sqrt{\log n/n}$, in sharp contrast to the sample-size-invariant calibrations of classical testing. This fundamental difference explains the Lindley paradox and establishes the risk-theoretic superiority of Bayesian procedures over fixed-$\alpha$ Neyman-Pearson tests. We extend the seminal Rubin (1965) program to contemporary settings including high-dimensional sparse inference, goodness-of-fit testing, and model selection. The framework unifies several classical results: Jeffreys' $\sqrt{\log n}$ threshold, the BIC penalty $(d/2)\log n$, and the Chernoff-Stein error exponents all emerge naturally from moderate deviation analysis of Bayes risk. Our results provide theoretical foundations for adaptive significance levels and connect Bayesian testing to information theory through gambling-based interpretations.