This work introduces a paradigm for constructing parametric neural operators that are derived from finite-dimensional representations of Green's operators, with learnable Green's functions, for linear partial differential equations (PDEs). We refer to such neural operators as Neural Green's Operators (NGOs). Our construction of NGOs preserves the linear action of Green's operators on the inhomogeneity fields, while approximating the nonlinear dependence of the Green's function on the coefficients of the PDE using neural networks that take weighted averages of such coefficients as input. This construction reduces the complexity of the problem from learning the entire solution operator and its dependence on all parameters to only learning the Green's function and its dependence on the PDE coefficients. Moreover, taking weighted averages, rather than point samples, of input functions decouples the network size from the number of sampling points, enabling efficient resolution of multiple scales in the input fields. Furthermore, we show that our explicit representation of Green's functions enables the embedding of desirable mathematical attributes in our NGO architectures, such as symmetry, spectral, and conservation properties. Through numerical benchmarks on canonical PDEs, we demonstrate that NGOs achieve comparable or superior accuracy to deep operator networks, variationally mimetic operator networks, and Fourier neural operators with similar parameter counts, while generalizing significantly better when tested on out-of-distribution data. For time-dependent PDEs, we show that NGOs can produce pointwise-accurate dynamics in an auto-regressive manner when trained on a single time step. Finally, we show that we can leverage the explicit representation of Green's functions returned by NGOs to construct effective matrix preconditioners that accelerate iterative solvers for PDEs.
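The core construction described above, a learned Green's function acting linearly on the inhomogeneity through a quadrature sum, with the network conditioned on weighted averages of the PDE coefficients, can be sketched in a few lines. Everything here (the 1D grid, the cosine weighting functions, and the random linear map standing in for a trained network) is a hypothetical illustration under stated assumptions, not the paper's actual NGO architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D grid and trapezoidal quadrature weights on [0, 1]
n = 32
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5

# Hypothetical stand-in for a trained network: maps m weighted averages
# of the coefficient field to the entries of a discrete Green's matrix G.
m = 4
phi = np.stack([np.cos(np.pi * k * x) for k in range(m)])  # weighting functions
W = 0.1 * rng.standard_normal((n * n, m))                  # untrained "weights"

def ngo_apply(a, f):
    """Toy NGO forward pass: u(x_i) = sum_j w_j G(x_i, y_j; a) f(y_j)."""
    feats = phi @ (w * a)           # weighted averages of the coefficient field
    G = (W @ feats).reshape(n, n)   # "network" output: discrete Green's function
    G = 0.5 * (G + G.T)             # embed symmetry of the Green's function
    return G @ (w * f)              # exact linear action on the inhomogeneity

a = 1.0 + 0.5 * np.sin(2 * np.pi * x)  # PDE coefficient field
f = np.sin(np.pi * x)                  # inhomogeneity
u = ngo_apply(a, f)

# Linearity in f holds exactly by construction, regardless of training:
assert np.allclose(ngo_apply(a, 2.0 * f), 2.0 * u)
```

Note how the two structural claims of the abstract show up directly: linearity in `f` is built into the matrix-vector product rather than learned, and the network sees only `m` weighted averages of `a`, so its size is independent of the number of grid points `n`.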