PDEs arise ubiquitously in science and engineering, and their solutions depend on parameters such as physical properties, boundary conditions, and geometry. Traditional numerical methods must re-solve the PDE for each new parameter value, making parameter-space exploration prohibitively expensive. Recent advances in machine learning, particularly physics-informed neural networks (PINNs) and neural operators, have transformed parametric PDE solving by learning solution operators that generalize across parameter spaces. We critically analyze the two main paradigms: (1) PINNs, which embed physical laws as soft constraints in the training loss and excel at inverse problems with sparse data, and (2) neural operators (e.g., DeepONet and the Fourier Neural Operator), which learn mappings between infinite-dimensional function spaces and generalize strongly across parameters. Through comparisons spanning fluid dynamics, solid mechanics, heat transfer, and electromagnetics, we show that neural operators can achieve speedups of $10^3$ to $10^5$ over traditional solvers in multi-query scenarios while maintaining comparable accuracy. We provide practical guidance for method selection, discuss theoretical foundations (universal approximation and convergence), and identify critical open challenges: high-dimensional parameters, complex geometries, and out-of-distribution generalization. This work establishes a unified framework for understanding parametric PDE solvers through the lens of operator learning, offering a comprehensive, incrementally updated resource for this rapidly evolving field.
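To make the soft-constraint idea behind PINNs concrete, the following is a minimal, self-contained sketch (not taken from any specific paper's code). It solves a hypothetical toy problem, $u''(x) = -\pi^2 \sin(\pi x)$ on $[0,1]$ with $u(0)=u(1)=0$ (exact solution $u(x)=\sin(\pi x)$), and for clarity replaces the neural network with a one-parameter ansatz $u_a(x) = a\sin(\pi x)$ so that derivatives and gradients are available in closed form rather than via automatic differentiation; the loss structure — PDE residual at collocation points plus a boundary penalty — is the same as in a full PINN.

```python
# Sketch of the PINN training principle: the PDE is enforced as a soft
# penalty in a loss minimized by gradient descent, not by a mesh solver.
# Toy problem (assumed for illustration): u''(x) = -pi^2 sin(pi x),
# u(0) = u(1) = 0, exact solution u(x) = sin(pi x).
# "Network": one-parameter ansatz u_a(x) = a * sin(pi x).
import numpy as np

def pinn_loss(a, x):
    u_xx = -a * np.pi**2 * np.sin(np.pi * x)          # u'' of the ansatz
    residual = u_xx + np.pi**2 * np.sin(np.pi * x)    # PDE residual (0 iff a = 1)
    bc = (a * np.sin(0.0))**2 + (a * np.sin(np.pi))**2  # boundary penalty
    return np.mean(residual**2) + bc

x = np.linspace(0.0, 1.0, 64)   # collocation points
a, lr = 0.0, 1e-3               # initial parameter, learning rate
for _ in range(200):
    # closed-form gradient of the residual loss with respect to a
    grad = 2.0 * np.pi**4 * (a - 1.0) * np.mean(np.sin(np.pi * x)**2)
    a -= lr * grad

print(round(a, 3))  # converges toward 1.0, recovering u(x) = sin(pi x)
```

In a real PINN the ansatz is a neural network and the derivatives entering the residual come from automatic differentiation, but the optimization target has exactly this shape: physics enters only through penalty terms, which is what lets the same framework absorb sparse data in inverse problems.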