Physics-informed neural networks (PINNs) have recently emerged as effective methods for solving partial differential equations (PDEs) across a wide range of problems. Substantial research has focused on the failure modes of PINNs, motivated by their frequently inaccurate predictions. However, most of these studies rest on the premise that driving the loss function to zero makes the network converge to a solution of the governing PDE. In this study, we prove that PINNs suffer from a fundamental issue: this premise is invalid. We further show that this issue stems from the inability to regulate the behavior of the derivatives of the predicted solution. Motivated by this \textit{derivative pathology} of PINNs, we propose a \textit{variable splitting} strategy that addresses the issue by parameterizing the gradient of the solution as an auxiliary variable. We demonstrate that the auxiliary variable eludes derivative pathology by enabling direct monitoring and regulation of the gradient of the predicted solution. Moreover, we prove that the proposed method guarantees convergence to a generalized solution for second-order linear PDEs, indicating its applicability to a broad class of problems.
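To make the variable-splitting idea concrete, the following is a minimal sketch (not the paper's implementation) of how an auxiliary network can parameterize the gradient of the solution. It uses a hypothetical 1D Poisson problem $u''(x) = f(x)$ on $[0,1]$ with zero boundary conditions: one network predicts $u$, a second predicts the auxiliary variable $p \approx u'$, and the loss combines the PDE residual written in terms of $p$, a coupling term $u' = p$ that regulates the gradient of the predicted solution, and a boundary term. All network sizes, learning rates, and the choice of PyTorch are illustrative assumptions.

```python
import math
import torch

torch.manual_seed(0)

# Hypothetical test problem: u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# with f(x) = -pi^2 sin(pi x), whose classical solution is u(x) = sin(pi x).
def f(x):
    return -(math.pi ** 2) * torch.sin(math.pi * x)

def mlp():
    # Small illustrative network; architecture is an assumption.
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

u_net = mlp()  # predicted solution u(x)
p_net = mlp()  # auxiliary variable p(x), intended to track u'(x)

opt = torch.optim.Adam(
    list(u_net.parameters()) + list(p_net.parameters()), lr=1e-3)

def ddx(out, x):
    # Derivative of a network output with respect to its input.
    return torch.autograd.grad(out, x, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

x = torch.rand(128, 1, requires_grad=True)   # interior collocation points
xb = torch.tensor([[0.0], [1.0]])            # boundary points

losses = []
for step in range(200):
    opt.zero_grad()
    u, p = u_net(x), p_net(x)
    residual = ddx(p, x) - f(x)   # PDE residual in the auxiliary variable: p' = f
    coupling = ddx(u, x) - p      # regulates the gradient of u: u' = p
    loss = (residual ** 2).mean() + (coupling ** 2).mean() \
        + (u_net(xb) ** 2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because $p$ is an explicit network output rather than a derivative computed implicitly through the loss, it can be monitored and regularized directly during training, which is the mechanism the abstract credits for avoiding derivative pathology.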