Bayesian deep learning (BDL) has emerged as a principled approach to producing reliable uncertainty estimates by integrating deep neural networks with Bayesian inference, yet the selection of informative prior distributions remains a significant challenge. Various function-space variational inference (FSVI) regularisation methods have been proposed to assign meaningful priors over model predictions. However, these methods typically rely on a Gaussian prior, which fails to capture the heavy-tailed statistics inherent in neural network outputs. By contrast, this work proposes a novel function-space empirical Bayes regularisation framework, termed ST-FS-EB, which employs heavy-tailed Student's $t$ priors in both parameter and function spaces. We approximate the posterior distribution through variational inference (VI), deriving an evidence lower bound (ELBO) objective based on Monte Carlo (MC) dropout. The proposed method is evaluated against various VI-based BDL baselines, and the results demonstrate robust performance in in-distribution prediction, out-of-distribution (OOD) detection, and under distribution shift.
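To make the two ingredients named above concrete, the following is a minimal sketch (not the paper's implementation) of (i) a Student's $t$ log-density used as a heavy-tailed weight-space regulariser, and (ii) MC-dropout prediction, whose sample spread approximates predictive uncertainty. The single linear layer, the degrees-of-freedom `nu`, and the dropout rate `p` are illustrative assumptions, not values from the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_t_log_prior(w, nu=3.0, scale=1.0):
    # Student's t log-density over weights, up to an additive constant.
    # Heavier tails than a Gaussian: penalty grows logarithmically, not
    # quadratically, for large |w|. nu and scale are illustrative choices.
    return -0.5 * (nu + 1.0) * np.log1p((w / scale) ** 2 / nu).sum()

def mc_dropout_predict(x, W, b, p=0.1, T=100):
    # Monte Carlo dropout: average T stochastic forward passes through a
    # single dropout-masked linear layer; the per-output standard
    # deviation serves as a crude predictive-uncertainty estimate.
    preds = []
    for _ in range(T):
        mask = rng.random(W.shape) > p        # Bernoulli dropout mask
        preds.append(x @ (W * mask) / (1.0 - p) + b)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy layer and inputs, purely for illustration.
W = rng.normal(size=(4, 2))
b = np.zeros(2)
x = rng.normal(size=(3, 4))

mean, std = mc_dropout_predict(x, W, b)
reg = student_t_log_prior(W)
```

In a training loop, `-reg` would be added (suitably scaled) to the negative log-likelihood, playing the role a KL term plays in an ELBO; the actual ST-FS-EB objective also places a Student's $t$ prior in function space, which this toy sketch omits.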