We present a subspace method based on neural networks for solving partial differential equations in weak form with high accuracy. The basic idea of our method is to use neural-network-based functions as basis functions to span a subspace, and then to find an approximate solution in this subspace. Training the basis functions and finding the approximate solution can be decoupled: different methods can be used to train the basis functions, and different methods can likewise be used to compute the approximate solution. In this paper, we compute the approximate solution of the partial differential equation in weak form. Our method achieves high accuracy at a low training cost. Numerical examples show that the cost of training the basis functions is low: only one hundred to two thousand epochs are needed for most tests, and the error falls below the level of $10^{-7}$ for some tests. The proposed method thus performs well in terms of both accuracy and computational cost.
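The subspace idea above can be sketched in a minimal form. This is our illustrative construction, not the paper's exact architecture: fixed random tanh features stand in for the trained neural-network basis functions, and a Galerkin (weak-form) solve picks their best combination for a manufactured 1D Poisson problem $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$. The weights `w`, shifts `c`, and the test problem are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 60                                  # number of basis functions (illustrative choice)
w = rng.uniform(-8.0, 8.0, m)           # hypothetical random feature weights
c = rng.uniform(-4.0, 4.0, m)           # hypothetical random feature shifts

def basis(x):
    """phi_j(x) = x(1-x) * tanh(w_j x + c_j); the x(1-x) factor enforces the Dirichlet BCs."""
    x = x[:, None]
    return x * (1 - x) * np.tanh(w * x + c)

def basis_dx(x):
    """Analytic derivative of phi_j, needed for the weak-form stiffness matrix."""
    x = x[:, None]
    t = np.tanh(w * x + c)
    return (1 - 2 * x) * t + x * (1 - x) * w * (1 - t ** 2)

# Gauss-Legendre quadrature mapped from (-1, 1) to (0, 1)
q, wq = np.polynomial.legendre.leggauss(200)
q = 0.5 * (q + 1.0)
wq = 0.5 * wq

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)  # manufactured so the exact solution is sin(pi x)

D = basis_dx(q)
A = (D * wq[:, None]).T @ D                   # stiffness: A_ij = int phi_i' phi_j' dx
b = basis(q).T @ (wq * f(q))                  # load:      b_i  = int f phi_i dx
coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares copes with ill-conditioning

xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(basis(xs) @ coef - np.sin(np.pi * xs)))
print(f"max pointwise error: {err:.2e}")
```

In the paper's setting the basis functions would come from a trained network rather than random features, but the downstream step is the same: once the subspace is fixed, finding the approximate solution reduces to a small linear weak-form system.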