Physics-informed neural networks (PINNs) have become a popular and powerful tool for solving partial differential equations (PDEs), but they still suffer from limited prediction accuracy inside the sampling domain and poor prediction ability beyond it. These shortcomings are usually mitigated by incorporating the physical properties of the PDEs into the loss function, or by devising specialized loss formulations for particular PDEs. In this paper, we design a symmetry-enhanced deep neural network (sDNN) whose architecture is invariant under a finite symmetry group. When the group admits matrix representations, the invariance is achieved by expanding the weight matrices and bias vectors of each hidden layer by the order of the group; otherwise, the set of input data and all hidden layers except the first are extended by the order of the group. Owing to this symmetric architecture, the total number of trainable parameters is only about the original PINN size divided by the order of the finite group. Furthermore, we give explicit forms of the weight matrices and bias vectors of the sDNN, and rigorously prove both that the architecture itself is invariant under the finite group and that the sDNN retains the universal approximation ability for functions invariant under that group. Numerical results show that the sDNN predicts accurately both inside and beyond the sampling domain and performs far better than the vanilla PINN, even with fewer training points and a simpler architecture.
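To make the weight-sharing idea concrete, below is a minimal sketch in Python/NumPy of one generic way to obtain a network invariant under a finite group (here Z_2 acting on the input by x -> -x): lift the input along all group elements, push each copy through the same shared weights, and average the outputs. This is an illustrative construction under our own assumptions, not the specific sDNN weight-matrix forms proved in the paper; names such as `sdnn_forward` are hypothetical. The shared per-layer weights are what keep the parameter count at roughly 1/|G| of a naively widened network.

```python
# Minimal sketch (not the paper's exact construction): a feed-forward network
# made invariant under a finite group G by lifting and weight sharing.
# Group: Z_2 acting on the input by reflection x -> -x. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Z_2 = {identity, reflection}; order |G| = 2.
group = [lambda x: x, lambda x: -x]

d_in, d_hidden, n_layers = 1, 16, 3

# One shared set of weights per layer -- roughly 1/|G| of the parameters a
# naively widened (|G| * d_hidden) network would require.
Ws = [rng.normal(size=(d_in if k == 0 else d_hidden, d_hidden)) for k in range(n_layers)]
bs = [rng.normal(size=d_hidden) for _ in range(n_layers)]
w_out = rng.normal(size=d_hidden)

def sdnn_forward(x):
    """G-invariant forward pass: lift the input along all group elements,
    push each copy through the same shared weights, then average."""
    outputs = []
    for g in group:                      # one channel per group element
        h = np.atleast_1d(g(x)).astype(float)
        for W, b in zip(Ws, bs):
            h = np.tanh(h @ W + b)       # shared weights across channels
        outputs.append(h @ w_out)
    return float(np.mean(outputs))       # group averaging => invariance

# Invariance check: f(x) == f(-x) up to floating-point error.
x = 0.7
print(sdnn_forward(x), sdnn_forward(-x))
```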