Neural networks are the cornerstone of modern machine learning, yet they can be difficult to interpret, tend to give overconfident predictions, and are vulnerable to adversarial attacks. Bayesian neural networks (BNNs) alleviate some of these limitations, but have problems of their own. The key step of specifying prior distributions in BNNs is a nontrivial task, yet it is often skipped out of convenience. In this work, we propose a new class of prior distributions for BNNs, the Dirichlet scale mixture (DSM) prior, which addresses current limitations of Bayesian neural networks through structured, sparsity-inducing shrinkage. Theoretically, we derive general dependence structures and shrinkage results for DSM priors and show how they manifest under the geometry induced by neural networks. In experiments on simulated and real-world data, we find that DSM priors encourage sparse networks through implicit feature selection, show robustness under adversarial attacks, and deliver competitive predictive performance with substantially fewer effective parameters. Their advantages are most pronounced in correlated, moderately small data regimes, and the resulting networks are more amenable to weight pruning. Moreover, by adopting heavy-tailed shrinkage mechanisms, our approach aligns with recent findings that such priors can mitigate the cold posterior effect, offering a principled alternative to the commonly used Gaussian priors.
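For intuition, a minimal sketch of a Dirichlet scale mixture construction (an illustrative assumption; the abstract itself does not spell out the exact DSM specification) places Dirichlet-distributed local scales on the weights of a layer, so that the simplex constraint couples the scales and jointly induces dependence and shrinkage:

\[
w_j \mid \phi_j, \tau \;\sim\; \mathcal{N}\!\left(0,\, \phi_j^2 \tau^2\right), \qquad
(\phi_1, \dots, \phi_p) \;\sim\; \mathrm{Dirichlet}(a, \dots, a), \qquad
\tau \;\sim\; \pi(\tau),
\]

where $\pi(\tau)$ is a heavy-tailed global-scale prior (e.g., half-Cauchy). Because the $\phi_j$ must sum to one, allocating a large scale to one weight forces the others toward zero, which is one mechanism by which such a prior can produce the implicit feature selection and pruning behaviour described above.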