Neural operators extend conventional neural networks by learning mappings between function spaces, enabling them to solve partial differential equations (PDEs). One of the most notable methods is the Fourier Neural Operator (FNO), which draws inspiration from Green's function methods and directly approximates operator kernels in the frequency domain. However, through empirical observation followed by theoretical validation, we demonstrate that the FNO approximates kernels primarily in a relatively low-frequency regime. This suggests a limited capability in solving complex PDEs, particularly those characterized by rapidly varying coefficients and oscillations in the solution space. Such cases are crucial in specific scenarios, such as atmospheric convection and ocean circulation. To address this challenge, inspired by the translation equivariance of the convolution kernel, we propose a novel hierarchical Fourier neural operator with convolution-residual layers and attention mechanisms, making them complementary in the frequency domain for solving complex PDEs. We conduct experiments on forward and inverse problems for multiscale elliptic equations, Navier-Stokes equations, and other physical scenarios, and find that the proposed method achieves superior performance on these PDE benchmarks, especially for equations characterized by rapid coefficient variations.
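The low-frequency bias discussed above comes from the structure of the FNO's spectral convolution, which transforms the input to Fourier space, multiplies only the lowest modes by learned weights, and zeroes the rest. The sketch below is a minimal, illustrative 1-D version in NumPy (not the authors' implementation; the function name, the fixed `n_modes` cutoff, and the all-ones weights are assumptions for demonstration), showing how any content above the mode cutoff is discarded:

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Illustrative 1-D FNO-style spectral convolution.

    u       : (n,) real samples of a function on a uniform grid
    weights : (n_modes,) complex multipliers (the learned kernel in Fourier space)
    n_modes : number of low-frequency modes retained; everything above is zeroed,
              which is the low-pass behavior highlighted in the abstract.
    """
    u_hat = np.fft.rfft(u)                          # to frequency domain
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights   # act on low modes only
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

# Demonstration: with identity weights, a low-frequency signal passes through,
# while a signal oscillating above the cutoff is annihilated entirely.
x = np.linspace(0.0, 1.0, 64, endpoint=False)
w = np.ones(8, dtype=complex)                        # identity kernel, 8 modes
u_low = np.sin(2 * np.pi * 3 * x)                    # frequency 3 < 8: kept
u_high = np.sin(2 * np.pi * 20 * x)                  # frequency 20 > 8: lost
```

Because rapidly varying coefficients concentrate energy in exactly the high modes that such a layer truncates, stacking these layers alone cannot recover that content, which motivates pairing them with convolution-residual branches that act in physical space.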