We introduce a conceptual framework for numerically solving linear elliptic, parabolic, and hyperbolic PDEs on bounded, polytopal domains in Euclidean spaces by deep neural networks. The PDEs are recast as the minimization of a least-squares (LSQ for short) residual of an equivalent, well-posed first-order system over parametric families of deep neural networks. The associated LSQ residual a) is equal or proportional to a weak residual of the PDE, b) is additive in terms of contributions from localized subnetworks, indicating a local ``out-of-equilibrium'' state of the neural network with respect to the PDE residual, c) serves as a numerical loss function for neural network training, and d) constitutes, even with incomplete training, a computable, (quasi-)optimal numerical error estimator in the context of adaptive LSQ finite element methods. In addition, an adaptive neural network growth strategy is proposed which, assuming exact numerical minimization of the LSQ loss functional, yields sequences of neural networks whose realizations converge rate-optimally to the exact solution of the first-order system LSQ formulation.
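To make the first-order system LSQ idea concrete, the following is a minimal sketch, not the paper's method: it takes the 1D model problem $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions, recasts it as the first-order system $\sigma - u' = 0$, $\sigma' + f = 0$, and minimizes the discrete LSQ residual of both equations. A small linear Fourier basis stands in for the deep neural network (an assumption made purely for brevity), so the minimization reduces to a single linear least-squares solve via NumPy.

```python
import numpy as np

# Model problem: -u'' = f on (0,1), u(0) = u(1) = 0,
# with f = pi^2 sin(pi x), so the exact solution is u = sin(pi x).
# First-order system reformulation (the LSQ residual of BOTH equations
# is minimized simultaneously):
#   sigma - u' = 0,   sigma' + f = 0.
# Illustrative linear ansatz (NOT the paper's neural network ansatz):
#   u(x)     = sum_k a_k sin(k pi x)   (satisfies the BCs exactly),
#   sigma(x) = sum_k b_k cos(k pi x).

K = 8                                   # number of basis functions
x = np.linspace(0.0, 1.0, 201)          # collocation points
f = np.pi**2 * np.sin(np.pi * x)

k = np.arange(1, K + 1)
S  = np.sin(np.outer(x, k * np.pi))     # basis for u
C  = np.cos(np.outer(x, k * np.pi))     # basis for sigma
dS = (k * np.pi) * C                    # basis for u'
dC = -(k * np.pi) * np.sin(np.outer(x, k * np.pi))  # basis for sigma'

# Stacked residual equations:  [sigma - u' ; sigma' + f] ≈ 0
A = np.block([[-dS, C],
              [np.zeros_like(dS), dC]])
rhs = np.concatenate([np.zeros_like(x), -f])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
a, b = coef[:K], coef[K:]

u_h = S @ a                             # reconstructed LSQ solution
err = np.max(np.abs(u_h - np.sin(np.pi * x)))
print(f"max error vs exact solution: {err:.2e}")
```

Because the exact pair $(u, \sigma) = (\sin \pi x, \pi \cos \pi x)$ lies in the chosen basis, the LSQ minimizer recovers it essentially to machine precision; with a neural network ansatz the same stacked residual would instead serve as the training loss, minimized by gradient descent.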