The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for the proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterizations, viz., graph neural operators, multipole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application of neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers equation, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators outperform existing machine-learning-based methodologies while being several orders of magnitude faster than conventional PDE solvers.
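To make the phrase "composition of linear integral operators and nonlinear activation functions" concrete, the following is a minimal sketch of a single Fourier neural operator layer in one dimension, written in PyTorch. It is an illustrative sketch rather than the authors' reference implementation: the names SpectralConv1d and FourierLayer, the mode-truncation parameter modes, and the GELU activation are assumptions made for this example.

```python
# Minimal sketch of one 1D Fourier neural operator layer (illustrative only).
# The linear integral operator is applied in Fourier space: transform the
# input, multiply a fixed number of low-frequency modes by learned complex
# weights, transform back, add a pointwise linear term, apply a nonlinearity.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Linear integral operator parameterized by weights on Fourier modes."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained low-frequency Fourier modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, n_grid_points)
        x_ft = torch.fft.rfft(x)                      # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        # multiply the retained modes by the learned complex weights
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


class FourierLayer(nn.Module):
    """One operator block: integral operator + pointwise linear + activation."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.gelu(self.spectral(x) + self.pointwise(x))


# The same parameters can be evaluated on inputs sampled at any resolution:
layer = FourierLayer(channels=8, modes=4)
for n in (64, 128, 256):
    u = torch.randn(2, 8, n)
    assert layer(u).shape == (2, 8, n)
```

Because the learned weights are attached to Fourier modes rather than to grid points, the layer accepts discretizations of different resolutions without retraining, which is the discretization invariance the abstract refers to.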