We present a novel and mathematically transparent approach to function approximation and to the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical application to supervised learning problems of regression and classification type confirms that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
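The core numerical step described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: a discretized Fredholm equation of the first kind, K f = g, is solved by Tikhonov-regularized least squares, i.e. minimizing ||K f - g||² + λ||f||². The kernel, grid size, and regularization parameter below are illustrative choices.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's method): after a
# Ritz-Galerkin discretization, the Fredholm equation of the first kind
# becomes a linear system K f = g with an ill-conditioned matrix K.
rng = np.random.default_rng(0)
n = 50                                   # number of Galerkin basis functions
x = np.linspace(0.0, 1.0, n)

# Smooth Gaussian kernel as a stand-in for the discretized integral operator;
# such kernels produce severely ill-conditioned matrices.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)

f_true = np.sin(2 * np.pi * x)           # coefficients to recover
g = K @ f_true + 1e-3 * rng.standard_normal(n)  # noisy right-hand side

# Tikhonov regularization: solve (K^T K + lam I) f = K^T g instead of the
# unstable unregularized normal equations.
lam = 1e-6
f_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

rel_residual = np.linalg.norm(K @ f_hat - g) / np.linalg.norm(g)
```

The regularization term λ||f||² trades a small bias for stability: without it, the near-zero singular values of K amplify the noise in g catastrophically. The paper's approach additionally compresses the high-dimensional discretized operator with tensor-train methods, which this small dense example omits.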