In recent years, solving partial differential equations has shifted the focus of neural network research from finite-dimensional Euclidean spaces to generalized function spaces. A novel methodology is to learn an operator that approximates the mapping between inputs and outputs. Researchers have proposed a variety of operator architectures, but most adopt an iterative update scheme in which a single operator is learned within the same function space. In practical physical science problems, the numerical solutions of partial differential equations are complex, and a serial single operator cannot accurately approximate the intricate mapping between input and output. We therefore propose a deep parallel operator model (DPNO) for solving partial differential equations efficiently and accurately. DPNO employs convolutional neural networks to extract local features and map the data into distinct latent spaces. A parallel block of two Fourier neural operators is designed to mitigate the iterative error problem: DPNO approximates complex input-output mappings by learning multiple operators in different latent spaces within these parallel blocks. Across six benchmark datasets, DPNO achieved the best performance on five, with an average improvement of 10.5\%, and ranked second on the remaining one.
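The parallel-block idea can be sketched in a heavily simplified 1-D NumPy form. This is an illustrative assumption, not the authors' implementation: the function names, the mode-truncation scheme, and the choice of summing the two branch outputs are all hypothetical, and a real Fourier neural operator would use learned multi-channel weights and nonlinearities.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(x, weights, modes):
    """Simplified Fourier-operator layer (illustrative sketch):
    FFT -> keep the lowest `modes` frequencies, scale them by learned
    complex weights -> inverse FFT back to physical space."""
    x_hat = np.fft.rfft(x, axis=-1)                   # (batch, n//2 + 1) spectrum
    out_hat = np.zeros_like(x_hat)
    out_hat[:, :modes] = x_hat[:, :modes] * weights   # truncate and mix modes
    return np.fft.irfft(out_hat, n=x.shape[-1], axis=-1)

def parallel_block(x, w1, w2, modes):
    """Hypothetical parallel block: two Fourier layers act on the same
    input in separate latent spaces and their outputs are combined,
    rather than composed serially (which accumulates iterative error)."""
    return fourier_layer(x, w1, modes) + fourier_layer(x, w2, modes)

batch, n, modes = 4, 64, 8
x = rng.standard_normal((batch, n))
w1 = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
w2 = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
y = parallel_block(x, w1, w2, modes)
print(y.shape)  # (4, 64)
```

In this sketch the two branches see the same input, so an error made by one operator need not propagate into the other, which is the intuition behind replacing a serial stack with parallel operators.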