Physics-Informed Neural Networks (PINNs) have been successfully applied to solve Partial Differential Equations (PDEs). Their loss function is founded on a strong residual minimization scheme. Variational Physics-Informed Neural Networks (VPINNs) are their natural extension to weak variational settings. In this context, the recent work on Robust Variational Physics-Informed Neural Networks (RVPINNs) highlights the importance of conveniently translating the norms of the underlying continuum-level spaces to the discrete level. Otherwise, VPINNs might become non-robust, meaning that residual minimization might be highly uncorrelated with the desired minimization of the error in the energy norm. However, enforcing this robustness in VPINNs typically entails dealing with the inverse of a Gram matrix, usually producing slow convergence speeds during training. In this work, we accelerate the implementation of RVPINNs by establishing an LU factorization of a sparse Gram matrix in a point-collocation scheme in the same spirit as the original PINNs. We call our method Collocation-based Robust Variational Physics-Informed Neural Networks (CRVPINN). We test our efficient CRVPINN algorithm on Laplace, advection-diffusion, and Stokes problems in two spatial dimensions.
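The computational pattern behind this acceleration can be sketched as follows. This is a minimal illustration, not the paper's implementation: we assume the robust loss takes the form r^T G^{-1} r for a discrete residual vector r and a sparse Gram matrix G, so that a one-time sparse LU factorization of G (here via SciPy's `splu`) replaces repeated solves during training. The tridiagonal matrix below is only a stand-in for a Gram matrix arising from locally supported test functions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse, symmetric positive-definite "Gram" matrix G.
# With locally supported test functions, G is sparse (here: tridiagonal).
n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
G = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# Factor G once, before the training loop starts.
lu = spla.splu(G)

def robust_loss(residual):
    """Evaluate r^T G^{-1} r by reusing the cached sparse LU factors,
    avoiding an explicit (dense) inverse of G."""
    return float(residual @ lu.solve(residual))

# Inside a training loop, only the residual vector changes; each loss
# evaluation costs a pair of sparse triangular solves.
r = np.random.default_rng(0).standard_normal(n)
loss = robust_loss(r)
```

The key design point is that the factorization is computed once and amortized over all training iterations, whereas inverting or re-solving with a dense Gram matrix at every step would dominate the training cost.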