We investigate a Tikhonov regularization scheme specifically tailored to shallow neural networks in the context of a classic inverse problem: approximating an unknown function and its derivatives on the unit cube from noisy measurements. The proposed scheme incorporates a penalty term built from one of three distinct yet closely related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm. The choice of penalty depends on the specific architecture of the neural network employed. We establish connections among these network norms and, in particular, trace how the associated constants depend on the dimension, deepening our understanding of how the norms interact. We revisit the universality of function approximation under these norms, develop a rigorous error-bound analysis for the Tikhonov regularization scheme, and make the dependence on the dimension explicit, clarifying how dimensionality affects approximation performance and how to design a neural network for diverse approximation tasks.
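To fix ideas, a Tikhonov scheme of this type can be sketched schematically as follows (the notation here is illustrative, not taken from the paper): given noisy samples $y_i = f(x_i) + \delta_i$ at points $x_i \in [0,1]^d$, one minimizes over shallow networks $u_\theta$ a penalized least-squares functional,

\[
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \bigl| u_\theta(x_i) - y_i \bigr|^2 \;+\; \alpha \, \| u_\theta \|_{\mathcal{N}}^2,
\]

where $\| \cdot \|_{\mathcal{N}}$ denotes the chosen network (semi)norm (extended Barron, variation, or Radon-BV, depending on the architecture) and $\alpha > 0$ balances data fidelity against network complexity.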