We study the learning-theoretic foundations of operator learning, using the linear layer of the Fourier Neural Operator (FNO) architecture as a model problem. First, we identify three main sources of error in the learning process: statistical error due to finite sample size, truncation error from the finite-rank approximation of the operator, and discretization error from observing functional data on a finite grid of domain points. We then analyze a Discrete Fourier Transform (DFT) based least squares estimator, establishing both upper and lower bounds on each of these errors.
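The estimator itself is not spelled out above; the following is a minimal sketch of a DFT-based least squares estimator under the assumption that the target operator is a convolution, and hence diagonal in the Fourier basis with one multiplier per frequency. All variable names (`true_w`, `w_hat`, `K`) are hypothetical, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid, K = 200, 64, 8  # sample size, grid size, number of retained low modes

# Hypothetical ground truth: a convolution operator, i.e. diagonal in the
# Fourier basis, with one complex multiplier true_w[k] per frequency k.
true_w = rng.standard_normal(n_grid) + 1j * rng.standard_normal(n_grid)

# Synthetic functional data observed on the grid: Y = F^{-1}(true_w * F(X)) + noise.
X = rng.standard_normal((n_samples, n_grid)) + 1j * rng.standard_normal((n_samples, n_grid))
Xf = np.fft.fft(X, axis=1)
noise = 0.01 * (rng.standard_normal((n_samples, n_grid))
                + 1j * rng.standard_normal((n_samples, n_grid)))
Y = np.fft.ifft(true_w * Xf, axis=1) + noise
Yf = np.fft.fft(Y, axis=1)

# DFT-based least squares: the model decouples across frequencies, so each
# multiplier reduces to a scalar least-squares fit over the n samples.
# Modes above frequency K are truncated to zero; this truncation is one of
# the three error sources (alongside statistical and discretization error).
w_hat = np.zeros(n_grid, dtype=complex)
retained = list(range(K)) + list(range(n_grid - K, n_grid))  # lowest |frequency| modes
for k in retained:
    # Scalar least squares: w_hat[k] = <Xf[:,k], Yf[:,k]> / <Xf[:,k], Xf[:,k]>.
    w_hat[k] = np.vdot(Xf[:, k], Yf[:, k]) / np.vdot(Xf[:, k], Xf[:, k])
```

With small observation noise, `w_hat` recovers `true_w` on the retained modes, while the truncated modes contribute an irreducible finite-rank approximation error.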