Bayesian neural networks (BNNs) approximate the posterior distribution over model parameters and use it for prediction via Bayesian Model Averaging (BMA). The quality of the posterior approximation is critical for accurate and robust predictions. Flatness in the loss landscape is known to be strongly associated with generalization performance, so it must be taken into account when improving the quality of the posterior approximation. In this work, we empirically demonstrate that BNNs often struggle to capture flatness. Moreover, we provide both experimental and theoretical evidence that BMA can be ineffective unless flatness is ensured. To address this, we propose Sharpness-Aware Bayesian Model Averaging (SA-BMA), a novel optimizer that seeks flat posteriors by computing divergence in the parameter space. SA-BMA aligns with the intrinsic nature of BNNs and generalizes existing sharpness-aware optimizers for DNNs. In addition, we suggest a Bayesian Transfer Learning scheme to efficiently leverage pre-trained DNNs. We validate the efficacy of SA-BMA in enhancing generalization performance under few-shot classification and distribution shift by ensuring a flat posterior.
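To make the BMA prediction step concrete, the following is a minimal sketch (not the paper's implementation): the predictive distribution is approximated by averaging the per-sample predictive distributions over posterior samples, p(y|x) ≈ (1/S) Σ_s p(y|x, W_s). The tiny linear "network" and Gaussian weight samples here are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bma_predict(x, weight_samples):
    """Bayesian Model Averaging: average the predictive distributions
    produced by each posterior weight sample W_s."""
    probs = [softmax(x @ W) for W in weight_samples]
    return np.mean(probs, axis=0)

# Toy example: 2 inputs with 3 features, 4 classes,
# and 10 draws standing in for posterior samples of the weights.
x = rng.normal(size=(2, 3))
weight_samples = [rng.normal(size=(3, 4)) for _ in range(10)]
p = bma_predict(x, weight_samples)
assert np.allclose(p.sum(axis=1), 1.0)  # rows are valid distributions
```

In practice the samples would come from the (approximate) posterior learned by the BNN rather than an arbitrary Gaussian; averaging probabilities (not logits) is what makes this a model average in distribution space.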