Neal (1996) proved that infinitely wide shallow Bayesian neural networks (BNNs) converge to Gaussian processes (GPs) when the network weights have bounded prior variance. Cho & Saul (2009) provided a useful recursive formula relating the covariance kernel of each layer to that of the layer immediately below, and worked out the layer-wise covariance kernel in explicit form for several common activation functions. Recent works, including Aitchison et al. (2021), have highlighted that the covariance kernels obtained in this manner are deterministic and hence preclude any possibility of representation learning, which amounts to learning a non-degenerate posterior of a random kernel given the data. To address this, they propose adding artificial noise to the kernel to retain stochasticity, and develop deep kernel inverse Wishart processes. Nonetheless, this artificial noise injection is open to the critique that it would not naturally emerge in a classic BNN architecture under an infinite-width limit. To address this, we show that a deep Bayesian neural network, in which each layer width approaches infinity and all network weights are elliptically distributed with infinite variance, converges to a process with $\alpha$-stable marginals in each layer that admits a conditionally Gaussian representation. These conditional random covariance kernels can be linked recursively in the manner of Cho & Saul (2009), even though marginally the process exhibits stable behavior, so that covariances are not even necessarily defined. We also generalize the recent results of Lor\'ia & Bhadra (2024) on shallow networks to multi-layer networks, and remedy the computational burden of their approach. The computational and statistical benefits over competing approaches stand out in simulations and in demonstrations on benchmark data sets.
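For concreteness, here is a minimal sketch of the two ingredients the abstract refers to, stated in illustrative notation rather than taken from the paper (the kernels $K^{(\ell)}$, the angle $\theta^{(\ell)}$, and the variables $S$, $Z$ are assumptions of this sketch). Up to variance scaling, the Cho & Saul (2009) degree-one (ReLU) arc-cosine recursion links consecutive layers via
\[
\theta^{(\ell)} = \cos^{-1}\!\left(\frac{K^{(\ell)}(x,x')}{\sqrt{K^{(\ell)}(x,x)\,K^{(\ell)}(x',x')}}\right),
\qquad
K^{(\ell+1)}(x,x') = \frac{1}{\pi}\sqrt{K^{(\ell)}(x,x)\,K^{(\ell)}(x',x')}\,
\bigl[\sin\theta^{(\ell)} + (\pi-\theta^{(\ell)})\cos\theta^{(\ell)}\bigr];
\]
and a conditionally Gaussian representation of a symmetric $\alpha$-stable variable is the classical scale-mixture form
\[
X \stackrel{d}{=} \sqrt{S}\,Z, \qquad S \sim \text{positive } (\alpha/2)\text{-stable}, \quad Z \sim N(0,\sigma^2), \quad S \perp Z,
\]
so that $X \mid S$ is Gaussian, while $X$ has no finite variance for $\alpha < 2$.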