We investigate a Tikhonov regularization scheme tailored to shallow neural networks for a classical inverse problem: recovering an unknown function and its derivatives on the unit cube from noisy measurements. The scheme employs a penalty term built from one of three distinct yet closely related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm; the choice of penalty depends on the architecture of the network being used. We establish connections among these network norms and, in particular, trace how each depends on the dimension, deepening our understanding of how the norms interplay with one another. We revisit the universality of function approximation under these norms, establish a rigorous error-bound analysis for the Tikhonov regularization scheme, and make the dependence on the dimension explicit, clarifying how dimensionality affects approximation performance and how one should design a neural network for diverse approximation tasks.
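To make the setting concrete, the following is a minimal numerical sketch (not the paper's method) of Tikhonov-regularized fitting of a shallow ReLU network to noisy samples, using the path norm \(\sum_i |a_i|\,\|(w_i,b_i)\|_2\) as a standard proxy for the variation-norm penalty. The target function, sample sizes, and step sizes are illustrative assumptions; the paper works on the unit cube, while this sketch uses the unit interval for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target on the unit interval (the paper treats the unit cube).
f = lambda x: np.sin(2 * np.pi * x)

# Noisy measurements.
n = 200
x = rng.uniform(0.0, 1.0, size=(n, 1))
y = f(x[:, 0]) + 0.05 * rng.standard_normal(n)

# Shallow ReLU network: u(x) = sum_i a_i * relu(w_i x + b_i).
m = 50
w = rng.standard_normal((m, 1))
b = rng.standard_normal(m)
a = 0.1 * rng.standard_normal(m)

def forward(x, w, b, a):
    z = x @ w.T + b          # (n, m) pre-activations
    h = np.maximum(z, 0.0)   # ReLU
    return h @ a, z, h

def path_norm(w, b, a):
    # Proxy penalty: sum_i |a_i| * ||(w_i, b_i)||_2, a standard
    # surrogate for the variation norm of a shallow ReLU network.
    return np.sum(np.abs(a) * np.sqrt(np.sum(w**2, axis=1) + b**2))

lam, lr = 1e-3, 1e-2  # assumed regularization weight and step size
losses = []
for step in range(2000):
    u, z, h = forward(x, w, b, a)
    r = u - y
    loss = 0.5 * np.mean(r**2) + lam * path_norm(w, b, a)
    losses.append(loss)
    # Gradients of the data-fidelity term.
    ga = h.T @ r / n
    gz = (r[:, None] * (z > 0)) * a          # (n, m)
    gw = gz.T @ x / n
    gb = gz.sum(axis=0) / n
    # Subgradients of the path-norm penalty.
    norms = np.sqrt(np.sum(w**2, axis=1) + b**2) + 1e-12
    ga += lam * np.sign(a) * norms
    gw += lam * (np.abs(a) / norms)[:, None] * w
    gb += lam * np.abs(a) / norms * b
    a -= lr * ga
    w -= lr * gw
    b -= lr * gb

print(f"regularized loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The penalty discourages large-path-norm solutions, which is the mechanism behind the error bounds for the Tikhonov scheme: the (semi)norm in the penalty controls the approximation class the network is drawn from.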