Physics-informed neural networks (PINNs) represent a significant advance in scientific machine learning: they embed fundamental physical laws into the training objective through loss functions, and they have been successfully applied to a variety of forward and inverse problems governed by partial differential equations (PDEs). However, a notable challenge can emerge in the early stages of training when solving inverse problems: the data loss remains high while the PDE residual loss is minimized rapidly, which exacerbates the imbalance between loss terms and impedes the overall efficiency of PINNs. To address this challenge, this study proposes a novel framework termed data-guided physics-informed neural networks (DG-PINNs). The DG-PINNs framework is structured into two distinct phases: a pre-training phase and a fine-tuning phase. In the pre-training phase, a neural network is trained by minimizing a loss function consisting of the data loss alone. In the fine-tuning phase, the same neural network minimizes a composite loss function comprising the data loss, the PDE residual loss, and, if available, the initial- and boundary-condition losses. Notably, the pre-training phase ensures that the data loss is already at a low value before the fine-tuning phase commences, which enables the fine-tuning phase to converge to a minimal composite loss in fewer iterations than existing PINNs require. To validate the effectiveness, noise robustness, and efficiency of DG-PINNs, extensive numerical investigations are conducted on inverse problems for several classical PDEs, including the heat equation, wave equation, Euler--Bernoulli beam equation, and Navier--Stokes equations. The numerical results demonstrate that DG-PINNs solve these inverse problems accurately and are robust to noise in the training data.
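The two-phase scheme described above can be sketched on a toy inverse problem for the heat equation u_t = λ u_xx. This is a minimal illustration, not the paper's implementation: to keep it self-contained, the neural network is replaced by a closed-form ansatz u_hat = a·exp(-b·t)·sin(x) whose PDE residual is available analytically, and all point counts, learning rates, and initial guesses are assumptions. The structure, however, mirrors DG-PINNs: phase 1 minimizes the data loss alone; phase 2 minimizes the composite loss, which recovers the unknown diffusivity λ.

```python
# Toy DG-PINNs sketch: pre-train on data loss, then fine-tune on a
# composite (data + PDE residual) loss to recover the diffusivity lam
# in u_t = lam * u_xx. The ansatz stands in for a neural network.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: u(x, t) = exp(-0.5 t) sin(x) solves u_t = 0.5 u_xx.
alpha_true = 0.5
xs = rng.uniform(0.0, np.pi, 200)
ts = rng.uniform(0.0, 1.0, 200)
u_data = np.exp(-alpha_true * ts) * np.sin(xs) + 0.01 * rng.normal(size=200)

def u_hat(p, x, t):
    # Surrogate "network": a * exp(-b t) * sin(x); p = [a, b, lam].
    a, b, _ = p
    return a * np.exp(-b * t) * np.sin(x)

def data_loss(p):
    return np.mean((u_hat(p, xs, ts) - u_data) ** 2)

def residual_loss(p):
    # Analytic PDE residual u_t - lam * u_xx for the ansatz.
    a, b, lam = p
    r = (lam - b) * a * np.exp(-b * ts) * np.sin(xs)
    return np.mean(r ** 2)

def grad(f, p, eps=1e-6):
    # Central finite-difference gradient (adequate for 3 parameters).
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

p = np.array([0.5, 0.1, 0.1])  # initial guess for [a, b, lam]

# Phase 1 (pre-training): minimize the data loss only, so the data
# loss is already low before the PDE residual enters the objective.
for _ in range(2000):
    p -= 0.1 * grad(data_loss, p)

# Phase 2 (fine-tuning): minimize the composite loss; the residual
# term now mainly pulls lam toward the data-fitted dynamics.
composite = lambda q: data_loss(q) + residual_loss(q)
for _ in range(2000):
    p -= 0.1 * grad(composite, p)

print(f"recovered diffusivity: {p[2]:.3f}")
```

In the full method the composite loss would also include initial- and boundary-condition terms when those are available, and the PDE residual would be evaluated by automatic differentiation through the network rather than analytically.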