Recent advances in scientific machine learning (SciML) have enabled neural operators (NOs) to serve as powerful surrogates for modeling the dynamic evolution of physical systems governed by partial differential equations (PDEs). While existing approaches focus primarily on learning to simulate the target PDE, they often overlook the more fundamental physical principles underlying these equations. Inspired by the way numerical solvers remain applicable across different settings of a PDE, we propose a multiphysics training framework that jointly learns from both the original PDEs and their simplified basic forms. Our framework enhances data efficiency, reduces predictive errors, and improves out-of-distribution (OOD) generalization, particularly under shifts of physical parameters and in synthetic-to-real transfer. Our method is architecture-agnostic and yields consistent improvements in normalized root mean square error (nRMSE) across a wide range of 1D/2D/3D PDE problems. Through extensive experiments, we show that explicitly incorporating fundamental physics knowledge significantly strengthens the generalization ability of neural operators. We will release models and code at https://sites.google.com/view/sciml-fundemental-pde.
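The abstract reports results in nRMSE but does not spell out the normalization convention here; a common choice in SciML benchmarks is to divide the RMSE by the root mean square of the reference solution. A minimal sketch under that assumption:

```python
import numpy as np

def nrmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Normalized RMSE: RMSE divided by the RMS of the target field.

    Note: normalization conventions vary (some works divide by the
    target's range or standard deviation instead); this is one common
    choice and is an assumption, not the paper's stated definition.
    """
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return float(rmse / np.sqrt(np.mean(target ** 2)))

# Toy example on a small 1D field.
target = np.array([1.0, 2.0, 3.0])
pred = np.array([1.0, 2.0, 3.0])
print(nrmse(pred, target))  # 0.0 for a perfect prediction
```

Because the error is scaled by the magnitude of the reference solution, nRMSE is comparable across PDE problems whose fields live on very different scales.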