Physics-Informed Neural Networks (PINNs) have gained popularity in scientific computing in recent years. However, they often fail to match the accuracy of classical methods for solving differential equations. In this paper, we identify two sources of this issue in the case of Cauchy problems: the use of $L^2$ residuals as objective functions and the approximation gap of neural networks. We show that minimizing the sum of the $L^2$ residual and the initial condition error is not sufficient to guarantee convergence to the true solution, as this loss function does not capture the underlying dynamics. Additionally, neural networks cannot capture singularities in the solutions because their image sets are not compact. This, in turn, affects the existence of global minima and the regularity of the network. We demonstrate that when the global minimum does not exist, machine precision becomes the dominant source of achievable error in practice. We also present numerical experiments in support of our theoretical claims.
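To make the objective under discussion concrete, the following is a minimal sketch (our own illustration, not taken from the paper) of the composite loss for a toy Cauchy problem $u' = u$, $u(0) = 1$: a discretized $L^2$ residual over collocation points plus the squared initial-condition error. It shows that a candidate which satisfies the initial condition exactly can still be detected only through the residual term; the function names and the specific perturbation are hypothetical.

```python
import math

def pinn_loss(u, du, ts, u0=1.0):
    """Composite PINN-style objective for the Cauchy problem u' = u, u(0) = u0:
    mean squared residual on collocation points plus the squared
    initial-condition error. Purely illustrative."""
    residual = sum((du(t) - u(t)) ** 2 for t in ts) / len(ts)
    ic_error = (u(0.0) - u0) ** 2
    return residual + ic_error

# Collocation points on (0, 1].
ts = [i / 100 for i in range(1, 101)]

# The exact solution e^t drives both terms to zero (up to round-off).
loss_exact = pinn_loss(math.exp, math.exp, ts)

# A perturbed candidate e^t + 0.1*sin(t) still satisfies u(0) = 1 exactly,
# so only the residual term registers the deviation from the dynamics.
u_bad = lambda t: math.exp(t) + 0.1 * math.sin(t)
du_bad = lambda t: math.exp(t) + 0.1 * math.cos(t)
loss_bad = pinn_loss(u_bad, du_bad, ts)
```

This is only the standard objective the abstract refers to; the paper's point is precisely that driving this quantity to a small value need not force the candidate toward the true solution.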