Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering. Using neural networks as an ansatz for the solution has proven challenging in terms of training time and approximation accuracy. In this contribution, we discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to make progress on both challenges. In most examples, the random sampling schemes outperform iterative, gradient-based optimization of physics-informed neural networks in both training time and accuracy, by several orders of magnitude. For time-dependent PDE, we construct neural basis functions only in the spatial domain and then solve the associated ordinary differential equation with classical methods from scientific computing over a long time horizon. This alleviates one of the greatest challenges for neural PDE solvers, because it does not require us to parameterize the solution in time. For second-order elliptic PDE in Barron spaces, we prove the existence of sampled networks with $L^2$ convergence to the solution. We demonstrate our approach on several time-dependent and static PDEs. We also illustrate how sampled networks can effectively solve inverse problems in this setting. Benefits compared to common numerical schemes include spectral convergence and mesh-free construction of basis functions.
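The core idea of sampled networks can be illustrated with a minimal sketch, which is not the paper's actual sampling scheme: hidden weights and biases are drawn from a simple data-agnostic distribution (Gaussian weights, uniform biases are assumptions here), and only the outer coefficients are determined by a linear least-squares fit of the PDE residual plus boundary conditions. For a 1D Poisson problem $-u'' = f$ with homogeneous Dirichlet data, this reduces to one matrix solve, with no gradient-based training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden weights and biases are *sampled*, not trained.
# Distribution choices below are illustrative assumptions.
n_features = 300
w = rng.normal(0.0, 4.0, n_features)
b = rng.uniform(-4.0, 4.0, n_features)

def features(x):
    """Hidden-layer activations tanh(w x + b) for a 1D input array x."""
    return np.tanh(np.outer(x, w) + b)

def features_xx(x):
    """Second derivative of each feature w.r.t. x: w^2 * tanh''(w x + b)."""
    t = features(x)
    return (w ** 2) * (-2.0 * t * (1.0 - t ** 2))

# Poisson problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# with f chosen so the exact solution is u(x) = sin(pi x).
x_col = np.linspace(0.0, 1.0, 200)
f = np.pi ** 2 * np.sin(np.pi * x_col)

# Linear least squares for the outer coefficients only:
# PDE residual rows at collocation points, plus weighted boundary rows.
A_pde = -features_xx(x_col)
A_bc = features(np.array([0.0, 1.0]))
lam = 100.0  # boundary penalty weight (an assumed hyperparameter)
A = np.vstack([A_pde, lam * A_bc])
rhs = np.concatenate([f, np.zeros(2)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the sampled network and compare with the exact solution.
x_test = np.linspace(0.0, 1.0, 1000)
u_hat = features(x_test) @ c
max_err = np.max(np.abs(u_hat - np.sin(np.pi * x_test)))
```

Because the hidden layer is fixed by sampling, the whole "training" step is a single least-squares problem; for time-dependent PDE the same spatial basis would instead be combined with time-dependent coefficients advanced by a classical ODE integrator, as the abstract describes.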