Although physics-informed neural networks (PINNs) have shown great potential for solving nonlinear partial differential equations (PDEs), they often suffer from insufficient accuracy or converge to incorrect solutions. Unlike most existing approaches, which enhance PINNs by optimizing the training process, this paper improves the neural network architecture itself. We propose a densely multiplied PINN (DM-PINN) architecture, which multiplies the output of each hidden layer with the outputs of all subsequent hidden layers. Without introducing additional trainable parameters, this simple mechanism significantly improves the accuracy of PINNs. The proposed architecture is evaluated on four benchmark problems (the Allen-Cahn equation, the Helmholtz equation, the Burgers equation, and the 1D convection equation). Comparisons between the proposed architecture and other PINN structures demonstrate the superior performance of DM-PINN in both accuracy and efficiency.
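As a rough illustration of the idea, the forward pass below is a minimal NumPy sketch of one plausible reading of the densely multiplied mechanism: each hidden layer's activation is element-wise multiplied into the running product of all earlier hidden activations before being passed on. The function names (`init_params`, `dm_forward`), the shared hidden width, and the tanh activation are assumptions for illustration, not the paper's exact implementation; note that the element-wise multiplications add no trainable parameters beyond those of a plain MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(sizes):
    """Plain MLP parameters; `sizes` lists layer widths, e.g. [2, 16, 16, 1].
    All hidden widths must match so activations can be multiplied element-wise."""
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
          for m, n in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(n) for n in sizes[1:]]
    return Ws, bs

def dm_forward(x, Ws, bs):
    """Sketch of a densely multiplied forward pass (assumed interpretation):
    every hidden output is multiplied into all later hidden outputs."""
    h = np.tanh(x @ Ws[0] + bs[0])
    prod = h  # element-wise product of all hidden outputs so far
    for W, b in zip(Ws[1:-1], bs[1:-1]):
        h = np.tanh(prod @ W + b)   # next hidden layer sees the dense product
        prod = prod * h             # multiply this layer's output into the product
    return prod @ Ws[-1] + bs[-1]   # linear output layer

# Example: a network with three hidden layers of width 16.
Ws, bs = init_params([2, 16, 16, 16, 1])
u = dm_forward(rng.normal(size=(5, 2)), Ws, bs)
```

In a PINN setting, `dm_forward` would stand in for the surrogate solution u(x, t), with the PDE residual obtained by automatic differentiation in a framework such as PyTorch or JAX rather than NumPy.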