Physics-informed neural networks (PINNs) have recently emerged as a promising way to compute solutions of partial differential equations (PDEs) using deep neural networks. However, despite their notable success in various fields, it remains unclear in many respects how to train PINNs effectively when the PDE solutions exhibit stiff behavior or high frequencies. In this paper, we propose a new method for training PINNs based on a variable-scaling technique. The method is simple and applies to a wide range of problems, including PDEs with rapidly varying solutions. Through various numerical experiments, we demonstrate the effectiveness of the proposed method on these problems and confirm that it can significantly improve the training efficiency and performance of PINNs. Furthermore, based on an analysis of the neural tangent kernel (NTK), we provide theoretical evidence for this phenomenon and show that our method can indeed improve the performance of PINNs.
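A minimal sketch of the idea behind variable scaling, on a hypothetical 1D example not taken from the paper: a rapidly oscillating function becomes an O(1)-scale function of a rescaled variable, so a network fitting it in the scaled variable sees a much lower effective frequency. The function, scaling factor, and names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical example (not from the paper): a rapidly varying target
# u(x) = sin(omega * x) on [0, 1] with a large frequency omega.
omega = 50.0

def u(x):
    return np.sin(omega * x)

# Variable scaling: introduce the scaled variable t = omega * x, so the
# rescaled function v(t) = u(t / omega) = sin(t) varies on an O(1) scale
# in t, which is easier for a neural network to represent and train on.
def v(t):
    return np.sin(t)

x = np.linspace(0.0, 1.0, 1001)
t = omega * x  # scaled variable

# The two parameterizations agree pointwise.
assert np.allclose(u(x), v(t))

# PDE residuals involve derivatives; these transform by the chain rule,
# du/dx = omega * dv/dt, which we verify here with finite differences.
du_dx = np.gradient(u(x), x)
dv_dt = np.gradient(v(t), t)
assert np.allclose(du_dx, omega * dv_dt, atol=1e-6)
```

In a PINN, this amounts to training the network on the scaled input `t` and multiplying each derivative in the PDE residual by the corresponding power of the scaling factor.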