A Bayesian neural network (BNN) approximates the posterior distribution of its model parameters and uses that posterior for prediction via Bayesian Model Averaging (BMA). The quality of the posterior approximation is critical for accurate and robust predictions. Since flatness of the loss landscape is known to be strongly associated with generalization performance, it must be taken into account when improving the quality of the posterior approximation. In this work, we empirically demonstrate that BNNs often struggle to capture flatness. Moreover, we provide both experimental and theoretical evidence that BMA can be ineffective when flatness is not ensured. To address this, we propose Sharpness-Aware Bayesian Model Averaging (SA-BMA), a novel optimizer that seeks flat posteriors by measuring divergence in the parameter space. SA-BMA aligns with the intrinsic nature of BNNs and generalizes existing sharpness-aware optimizers for DNNs. In addition, we propose a Bayesian Transfer Learning scheme to efficiently leverage pre-trained DNNs. We validate the efficacy of SA-BMA in enhancing generalization performance on few-shot classification and under distribution shift by ensuring a flat posterior.
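As a rough illustration only, the sketch below shows how a sharpness-aware update can be applied to the variational parameters of a posterior, in the spirit of a SAM-style optimizer generalized to a BNN. It uses a Euclidean perturbation ball as a simplifying stand-in for SA-BMA's divergence-based neighborhood in parameter space; the names `variational_params`, `loss_fn`, and `rho` are hypothetical and not taken from the paper.

```python
import torch

def sharpness_aware_step(variational_params, loss_fn, optimizer, rho=0.05):
    """One sharpness-aware update over variational parameters (sketch only).

    variational_params: list of leaf tensors (e.g., posterior means/log-stds)
    loss_fn: closure returning the training loss (e.g., negative ELBO)
    optimizer: a torch.optim optimizer over variational_params
    rho: radius of the perturbation neighborhood (Euclidean proxy here;
         SA-BMA defines the neighborhood via a divergence instead)
    """
    # 1) Gradients of the loss at the current posterior parameters.
    loss_fn().backward()

    # 2) Ascend to the (approximate) worst case within the rho-ball.
    grad_norm = torch.sqrt(
        sum((p.grad ** 2).sum() for p in variational_params if p.grad is not None)
    )
    perturbations = []
    with torch.no_grad():
        for p in variational_params:
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            perturbations.append(eps)

    # 3) Gradients at the perturbed point, then descend from the original point.
    optimizer.zero_grad()
    loss_fn().backward()
    with torch.no_grad():
        for p, eps in zip(variational_params, perturbations):
            p.sub_(eps)  # restore original variational parameters
    optimizer.step()
    optimizer.zero_grad()
```

In practice this step would sit inside a variational-inference training loop, with `loss_fn` re-evaluating the negative ELBO on the current mini-batch at each call.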