Characterizing the function space explored by neural networks (NNs) is an important aspect of learning theory. In this work, observing that a multi-layer NN implicitly generates a hierarchy of reproducing kernel Hilbert spaces (RKHSs) - which we name a neural Hilbert ladder (NHL) - we define the function space as an infinite union of RKHSs, generalizing the existing Barron space theory of two-layer NNs. We then establish several theoretical properties of the new space. First, we prove a correspondence between functions expressed by L-layer NNs and those belonging to L-level NHLs. Second, we prove generalization guarantees for learning an NHL with a controlled complexity measure. Third, we derive a non-Markovian dynamics of random fields that governs the evolution of the NHL induced by the training of multi-layer NNs in an infinite-width mean-field limit. Fourth, we show examples of depth separation in NHLs under the ReLU activation function. Finally, we perform numerical experiments to illustrate the feature-learning aspect of NN training through the lens of the NHL.
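To make the "ladder" picture concrete, the following is a minimal, hypothetical sketch rather than the paper's formal construction: in a randomly initialized finite-width ReLU network, each hidden layer's feature map induces an empirical kernel via its Gram matrix, so stacking layers yields one kernel (hence one RKHS) per level. The helper `layerwise_kernels` and the chosen widths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def layerwise_kernels(X, widths):
    """Return the empirical kernel (Gram) matrix induced by each hidden layer
    of a randomly initialized ReLU network with the given hidden widths."""
    kernels = []
    H = X  # level-0 features are the raw inputs
    for m in widths:
        # random weights with 1/sqrt(fan-in) scaling, as in mean-field setups
        W = rng.normal(size=(H.shape[1], m)) / np.sqrt(H.shape[1])
        H = relu(H @ W)              # features at the next level of the ladder
        kernels.append(H @ H.T / m)  # empirical kernel at this level
    return kernels

X = rng.normal(size=(5, 3))  # 5 inputs in R^3
for level, K in enumerate(layerwise_kernels(X, widths=[64, 64]), start=1):
    print(f"level-{level} kernel shape: {K.shape}")
```

In this toy picture, training would reshape the feature distributions and hence the kernel at each level; the paper's infinite-width mean-field analysis studies the limiting analogue of this evolution.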