In recent engineering applications of deep learning, physics-informed neural networks (PINNs) are a notable development because they exploit the underlying physics of engineering systems. The novelty of PINNs lies in using partial differential equations (PDEs) in the loss function. Most PINNs are implemented with automatic differentiation (AD) to evaluate the PDE loss during training. A less well-studied alternative is the finite difference method (FDM). Compared with an AD-based PINN, an immediate benefit of an FDM-based PINN is its low implementation cost. In this paper, we propose using the finite difference method to estimate the PDE loss functions in a PINN. Our work is inspired by computational analysis of electromagnetic systems, where Laplace's equation is traditionally solved by successive over-relaxation. For Laplace's equation, our PINN approach can be seen as taking the Laplacian filter response of the neural network output as the loss function, so the implementation can be very simple. In our experiments, we tested the PINN on Laplace's equation and Burgers' equation. We show that with FDM, the PINN consistently outperforms non-PINN deep learning. Compared with AD-based PINNs, our method is faster to compute while on par in terms of error reduction.
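To make the "Laplacian filter response as loss" idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes the network's prediction is available as a 2D NumPy array sampled on a uniform grid, applies the standard 5-point Laplacian stencil to the interior points, and takes the mean squared residual as the FDM-based PDE loss for Laplace's equation.

```python
import numpy as np

def laplacian_residual(u, h):
    """5-point finite-difference Laplacian of grid u (interior points only).

    u : 2D array of network outputs on a uniform grid with spacing h.
    Returns (u_xx + u_yy) approximated at each interior grid point.
    """
    return (u[1:-1, 2:] + u[1:-1, :-2] + u[2:, 1:-1] + u[:-2, 1:-1]
            - 4.0 * u[1:-1, 1:-1]) / h**2

def pde_loss(u, h):
    """FDM-based PDE loss for Laplace's equation: mean squared Laplacian residual."""
    r = laplacian_residual(u, h)
    return float(np.mean(r**2))

# u(x, y) = x^2 - y^2 is harmonic, so its Laplacian residual should vanish
# (the 5-point stencil is exact for quadratics, up to floating-point error).
n = 32
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u_harmonic = x**2 - y**2
print(pde_loss(u_harmonic, h))
```

In a training loop, the same stencil would be applied (in a framework such as PyTorch, as a fixed convolution kernel) to the network output so that gradients flow through the residual; the sketch above only illustrates the loss computation itself.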