Physics-Informed Neural Networks (PINNs) have emerged as a prominent machine learning approach for solving Partial Differential Equations (PDEs). Although their variants have achieved significant progress, the empirical success of feature mapping in the wider Implicit Neural Representation literature has been largely overlooked in PINNs. We investigate the training dynamics of PINNs with a feature mapping layer via the limiting Conjugate Kernel and Neural Tangent Kernel, which sheds light on the convergence and generalisation of the model. We also show the inadequacy of commonly used Fourier-based feature mappings in some scenarios and propose conditionally positive definite Radial Basis Functions as a better alternative. Empirical results demonstrate the efficacy of our method on diverse forward and inverse problem sets. This simple technique can be implemented in any coordinate-input network and benefits the broader PINNs research community.
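As a minimal NumPy sketch of the two kinds of feature-mapping layers the abstract contrasts: a random Fourier feature mapping and an RBF feature mapping applied to coordinate inputs before the network proper. All names, shapes, and hyperparameters here are illustrative assumptions, and the Gaussian kernel below is a stand-in only — the paper's actual proposal uses conditionally positive definite RBFs, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    # Random Fourier feature mapping: x -> [cos(2*pi*xB), sin(2*pi*xB)].
    # B is a random frequency matrix, as in Fourier-feature networks.
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

def rbf_features(x, centers, sigma=0.5):
    # Gaussian RBF feature mapping: one feature per centre,
    # exp(-||x - c||^2 / (2 sigma^2)). Illustrative kernel only;
    # the paper advocates conditionally positive definite RBFs.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy 2-D coordinates, e.g. space-time inputs (x, t) of a PINN.
x = rng.uniform(-1.0, 1.0, size=(8, 2))
B = rng.normal(0.0, 1.0, size=(2, 16))        # hypothetical frequency matrix
centers = rng.uniform(-1.0, 1.0, size=(32, 2))  # hypothetical RBF centres

print(fourier_features(x, B).shape)   # (8, 32)
print(rbf_features(x, centers).shape)  # (8, 32)
```

Either mapping lifts the low-dimensional coordinates into a higher-dimensional feature space before the first dense layer; in the kernel view taken by the paper, this choice changes the limiting Conjugate and Neural Tangent Kernels and hence the training dynamics.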