Solving partial differential equations remains a central challenge in scientific machine learning. Neural operators offer a promising route by learning mappings between function spaces and enabling resolution-independent inference, yet they typically require supervised data. Physics-informed neural networks avoid this requirement by training unsupervised against physical constraints, but they often suffer from unstable convergence and limited generalization. To overcome these issues, we introduce a multi-stage physics-informed training strategy that achieves convergence by progressively enforcing boundary conditions in the loss function and subsequently incorporating interior residuals. At each stage the optimizer is re-initialized, acting as a continuation mechanism that restores stability and prevents gradient stagnation. We further propose the Physics-Informed Spline Fourier Neural Operator (PhIS-FNO), which combines Fourier layers with Hermite spline kernels for smooth residual evaluation. Across canonical benchmarks, PhIS-FNO attains accuracy comparable to supervised learning while using labeled information only in a narrow boundary region, establishing staged, spline-based optimization as a robust paradigm for physics-informed operator learning.
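The staged training with optimizer re-initialization can be illustrated on a toy problem. The sketch below is a minimal stand-in, not the PhIS-FNO architecture: it fits a polynomial ansatz to the boundary-value problem u'' = 2 on [0, 1] with u(0) = u(1) = 0 (exact solution u = x² − x), first minimizing only the boundary loss, then re-initializing a hand-rolled Adam optimizer (fresh first- and second-moment estimates) and adding the interior residual. The ansatz, hyperparameters, and collocation grid are illustrative assumptions.

```python
import numpy as np

def u(c, x):
    """Polynomial model u(x) = sum_k c_k x^k."""
    return sum(ck * x**k for k, ck in enumerate(c))

def u_xx(c, x):
    """Second derivative of the polynomial model."""
    return sum(ck * k * (k - 1) * x**(k - 2) for k, ck in enumerate(c) if k >= 2)

def grads(c, x_int, f, stage):
    """Analytic gradient of the stage loss w.r.t. coefficients c."""
    g = np.zeros_like(c)
    # Boundary loss: u(0)^2 + u(1)^2 (used in every stage).
    for xb in (0.0, 1.0):
        g += 2.0 * u(c, xb) * np.array([xb**k for k in range(len(c))])
    if stage == 2:
        # Interior residual loss: mean over collocation points of (u'' - f)^2.
        res = u_xx(c, x_int) - f(x_int)
        basis = np.array([[k * (k - 1) * xi**(k - 2) if k >= 2 else 0.0
                           for k in range(len(c))] for xi in x_int])
        g += 2.0 * basis.T @ res / len(x_int)
    return g

def adam_stage(c, steps, stage, x_int, f, lr=0.05, decay=0.999):
    """One training stage; Adam moments start from zero (the re-initialization)."""
    m = np.zeros_like(c)
    v = np.zeros_like(c)
    for t in range(1, steps + 1):
        g = grads(c, x_int, f, stage)
        m = 0.9 * m + 0.1 * g
        v = 0.999 * v + 0.001 * g * g
        m_hat = m / (1 - 0.9**t)
        v_hat = v / (1 - 0.999**t)
        c = c - lr * decay**t * m_hat / (np.sqrt(v_hat) + 1e-8)
    return c

rng = np.random.default_rng(0)
c0 = 0.1 * rng.standard_normal(4)          # cubic ansatz, random init
x_int = np.linspace(0.05, 0.95, 20)        # interior collocation points
f = lambda x: 2.0 * np.ones_like(x)        # source term: u'' = 2

# Stage 1: enforce boundary conditions only.
c1 = adam_stage(c0, 1500, stage=1, x_int=x_int, f=f)
# Stage 2: fresh optimizer state, boundary + interior residual.
c_final = adam_stage(c1, 6000, stage=2, x_int=x_int, f=f)
```

The continuation effect is that stage 2 starts from a boundary-consistent iterate, and the zeroed Adam moments discard stale gradient statistics from the previous loss, which is what the abstract describes as restoring stability between stages.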