Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias. While in some limits neural networks correspond to function spaces that are reproducing kernel Hilbert spaces, these regimes do not capture the properties of the networks used in practice. In contrast, in this paper we show that deep neural networks define suitable reproducing kernel Banach spaces. These spaces are equipped with norms that enforce a form of sparsity, enabling them to adapt to potential latent structures within the input data and their representations. In particular, leveraging the theory of reproducing kernel Banach spaces, combined with variational results, we derive representer theorems that justify the finite architectures commonly employed in applications. Our study extends analogous results for shallow networks and can be seen as a step towards the analysis of more practically plausible neural architectures.
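As background for the representer theorems mentioned above, the classical Hilbert-space statement can be sketched as follows (a standard result, stated here for context; it is not the theorem proved in this paper):

```latex
% Classical RKHS representer theorem: for an RKHS \mathcal{H} with kernel k
% and data (x_1, y_1), \dots, (x_n, y_n), the regularized empirical risk
\min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)
  + \lambda \, \|f\|_{\mathcal{H}}^2
% admits a minimizer that is a finite kernel expansion over the data:
f^\star(x) \;=\; \sum_{i=1}^{n} c_i \, k(x, x_i),
  \qquad c_1, \dots, c_n \in \mathbb{R}.
```

In the Banach-space setting with sparsity-enforcing norms, the analogous statements known for shallow networks take a different form: minimizers are networks with finitely many neurons, e.g. $f^\star(x) = \sum_{j=1}^{N} a_j \, \sigma(\langle w_j, x\rangle + b_j)$ with the width $N$ bounded in terms of the number of data points. The results of this paper extend this picture to deep architectures; the precise norms and bounds are given in the body of the paper.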