Bayesian Neural Networks (BNNs) offer a principled and natural framework for proper uncertainty quantification in the context of deep learning. They address typical challenges of conventional deep learning methods, such as their insatiable demand for data, their ad-hoc nature, and their susceptibility to overfitting. However, their implementation typically relies either on Markov chain Monte Carlo (MCMC) methods, which are computationally intensive and inefficient in high-dimensional spaces, or on variational inference methods, which tend to underestimate uncertainty. To address these issues, we propose a novel Calibration-Emulation-Sampling (CES) strategy to significantly enhance the computational efficiency of BNNs. In this framework, during the initial calibration stage, we collect a small set of samples from the parameter space. These samples serve as training data for the emulator, which approximates the map between parameters and the posterior probability. The trained emulator is then used for sampling from the posterior distribution at substantially higher speed than standard BNN inference. Using simulated and real data, we demonstrate that our proposed method improves the computational efficiency of BNNs while maintaining similar performance in terms of prediction accuracy and uncertainty quantification.