"AI for Science" aims to solve fundamental scientific problems using AI techniques. Since most physical phenomena can be described by Partial Differential Equations (PDEs), approximating their solutions with neural networks has become a central component of scientific ML. Physics-Informed Neural Networks (PINNs) are the standard method that has evolved for this task, but their training is well known to be very unstable. In this work we explore the possibility of changing the model being trained from a plain neural network to a non-linear transformation of it, one that algebraically includes the boundary/initial conditions. This reduces the number of terms in the loss function relative to the standard PINN loss. We demonstrate that our modification leads to significant performance gains across a range of benchmark tasks, in various dimensions, and without having to tweak the training algorithm. Our conclusions are based on hundreds of experiments, conducted in the fully unsupervised setting, over multiple linear and non-linear PDEs set to exactly solvable scenarios; this allows a concrete measurement of the performance gains, with our method achieving fractional errors order(s) of magnitude lower than those of standard PINNs. The code accompanying this manuscript is publicly available at https://github.com/MorganREN/Improving-PINNs-By-Algebraic-Inclusion-of-Boundary-and-Initial-Conditions
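To make the core idea concrete, the sketch below illustrates the general principle of algebraically enforcing a condition, on a toy initial-value problem with u(0) = u0: instead of training a network N(x) directly and penalizing the initial condition in the loss, one trains the transformed model u_hat(x) = u0 + x * N(x), which satisfies the condition exactly for any network weights. This is a minimal illustrative construction, not the paper's exact transformation; the tiny MLP and its parameters here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP N(x) with arbitrary (untrained) weights -- the point
# is that the ansatz below satisfies the initial condition regardless of them.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(x):
    """Plain feed-forward network N(x) for scalar input x."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2).item()

# Transformed model: u_hat(x) = u0 + x * N(x).
# The initial condition u_hat(0) = u0 holds algebraically, so the training
# loss needs only the PDE-residual term -- no separate IC penalty.
u0 = 1.0

def u_hat(x):
    return u0 + x * net(x)

print(u_hat(0.0))  # exactly u0, for any network weights
```

With this transformation, the optimizer no longer has to balance a residual term against a boundary/initial-condition term, which is one source of the training instability the abstract refers to.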