Operator learning has emerged as a promising paradigm for developing efficient surrogate models to solve partial differential equations (PDEs). However, existing approaches often overlook the domain knowledge inherent in the underlying PDEs and hence struggle to capture temporal dynamics and to generalize beyond the training time frame. This paper introduces a deep neural ordinary differential equation (ODE) operator network framework, termed NODE-ONet, to alleviate these limitations. The framework adopts an encoder-decoder architecture comprising three core components: an encoder that spatially discretizes input functions, a neural ODE that captures latent temporal dynamics, and a decoder that reconstructs solutions in physical space. Theoretically, an error analysis of the encoder-decoder architecture is established. Computationally, we propose novel physics-encoded neural ODEs that incorporate PDE-specific physical properties. Such well-designed neural ODEs significantly reduce the framework's complexity while enhancing numerical efficiency, robustness, applicability, and generalization capacity. Numerical experiments on nonlinear diffusion-reaction and Navier-Stokes equations demonstrate high accuracy, computational efficiency, and predictive capability beyond the training time frame. Additionally, the framework's flexibility to accommodate diverse encoders/decoders and its ability to generalize across related PDE families further underscore its potential as a scalable, physics-encoded tool for scientific machine learning.
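For concreteness, the following is a minimal, self-contained PyTorch sketch of the encoder/latent-ODE/decoder pipeline described above. All class names, network widths, latent dimensions, and the fixed-step RK4 integrator are illustrative assumptions; the sketch does not reproduce the authors' NODE-ONet implementation or its physics-encoded ODE right-hand side.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an input function sampled at fixed spatial sensors to a latent state z(0)."""
    def __init__(self, n_sensors, latent_dim, width=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                 nn.Linear(width, latent_dim))
    def forward(self, u0):               # u0: (batch, n_sensors)
        return self.net(u0)              # (batch, latent_dim)

class LatentODEFunc(nn.Module):
    """Generic right-hand side f_theta(z, t) of the latent ODE dz/dt = f_theta(z, t)."""
    def __init__(self, latent_dim, width=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, width), nn.Tanh(),
                                 nn.Linear(width, latent_dim))
    def forward(self, t, z):
        return self.net(z)

def rk4_integrate(func, z0, ts):
    """Classical fixed-step RK4 over the time grid ts; returns latent states at all times."""
    zs, z = [z0], z0
    for i in range(len(ts) - 1):
        h = ts[i + 1] - ts[i]
        k1 = func(ts[i], z)
        k2 = func(ts[i] + h / 2, z + h / 2 * k1)
        k3 = func(ts[i] + h / 2, z + h / 2 * k2)
        k4 = func(ts[i] + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        zs.append(z)
    return torch.stack(zs, dim=1)        # (batch, n_times, latent_dim)

class Decoder(nn.Module):
    """Reconstructs the solution on the physical grid from each latent state."""
    def __init__(self, latent_dim, n_grid, width=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, width), nn.Tanh(),
                                 nn.Linear(width, n_grid))
    def forward(self, z):                # z: (batch, n_times, latent_dim)
        return self.net(z)               # (batch, n_times, n_grid)

class NodeONet(nn.Module):
    """Encoder -> latent neural ODE -> decoder surrogate for time-dependent PDEs (sketch)."""
    def __init__(self, n_sensors, n_grid, latent_dim=32):
        super().__init__()
        self.encoder = Encoder(n_sensors, latent_dim)
        self.odefunc = LatentODEFunc(latent_dim)
        self.decoder = Decoder(latent_dim, n_grid)
    def forward(self, u0, ts):
        z0 = self.encoder(u0)
        zs = rk4_integrate(self.odefunc, z0, ts)
        return self.decoder(zs)

# Usage: predict a solution trajectory from a batch of initial conditions.
model = NodeONet(n_sensors=100, n_grid=100)
u0 = torch.randn(8, 100)                 # initial conditions sampled at 100 sensors
ts = torch.linspace(0.0, 1.0, 51)        # time grid; may extend past the training horizon
pred = model(u0, ts)                     # (8, 51, 100)
```

In the physics-encoded variant advocated by the paper, the generic multilayer-perceptron right-hand side above would be replaced by a vector field whose structure reflects properties of the target PDE family, which is what the abstract credits for the reduced complexity and improved extrapolation in time.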