We propose a new Bayesian strategy for adaptation to smoothness in nonparametric models based on heavy-tailed series priors. We illustrate it in a variety of settings, showing in particular that the corresponding Bayesian posterior distributions achieve adaptive rates of contraction in the minimax sense (up to logarithmic factors) without the need to sample hyperparameters. Unlike many existing procedures, which perform a form of direct model (or estimator) selection, the method can be seen as performing a soft selection through the prior tail. In Gaussian regression, such heavy-tailed priors are shown to lead to (near-)optimal simultaneous adaptation in both the $L^2$- and $L^\infty$-sense. Results are also derived for linear inverse problems, for anisotropic Besov classes, and, through the use of tempered posterior distributions, for certain losses in more general models. We present numerical simulations corroborating the theory.
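To fix ideas, the following is a minimal sketch of one possible heavy-tailed series prior of the kind described above: a random function $f(x) = \sum_j \theta_j \varphi_j(x)$ with coefficients $\theta_j = \sigma_j \xi_j$, where the $\xi_j$ are i.i.d. heavy-tailed (here Student-$t$) and the $\sigma_j$ are deterministic decaying scales. The basis, the polynomial decay of the scales, and the degrees-of-freedom parameter chosen here are illustrative assumptions, not necessarily the exact construction analyzed in the paper; the heavy tail of $\xi_j$ is what allows individual coefficients to remain large, acting as a soft selection of the effective truncation level.

```python
# Illustrative sketch (assumed choices, not the paper's exact prior):
# one draw from a heavy-tailed series prior on [0, 1] in a cosine basis.
import numpy as np

rng = np.random.default_rng(0)

J = 512        # number of basis functions retained (truncation for plotting)
alpha = 1.5    # illustrative decay exponent for the scales sigma_j
nu = 2.0       # Student-t degrees of freedom: small nu = heavy tails

j = np.arange(1, J + 1)
sigma = j ** (-(0.5 + alpha))        # polynomially decaying scales (assumed)
xi = rng.standard_t(df=nu, size=J)   # i.i.d. heavy-tailed coefficients
theta = sigma * xi                   # theta_j = sigma_j * xi_j

x = np.linspace(0.0, 1.0, 1000)
# Cosine basis: phi_1 = 1, phi_j(x) = sqrt(2) cos(pi (j-1) x) for j >= 2.
Phi = np.ones((J, x.size))
Phi[1:] = np.sqrt(2.0) * np.cos(np.pi * np.outer(j[1:] - 1, x))

f = theta @ Phi                      # one sample path from the series prior
print(f[:5])
```

Compared with a Gaussian choice of $\xi_j$ (which would tie the prior to a single smoothness level through $\sigma_j$), occasional large draws of the Student-$t$ variables let the prior place mass on functions rougher than the scales alone suggest, which is one way to picture the adaptation mechanism.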