The curse of dimensionality taxes computational resources heavily, as the cost of solving a partial differential equation (PDE) grows exponentially with its dimension. This poses great challenges for high-dimensional PDEs, as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving PDEs in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. We develop a new method for scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional PDEs. The new method, called Stochastic Dimension Gradient Descent (SDGD), decomposes the gradient of the PDE into pieces corresponding to different dimensions and randomly samples a subset of these dimensional pieces in each iteration of training PINNs. We theoretically prove the convergence and other desired properties of the proposed method. We demonstrate in diverse tests that the proposed method can solve many notoriously hard high-dimensional PDEs, including the Hamilton-Jacobi-Bellman (HJB) and the Schr\"{o}dinger equations in tens of thousands of dimensions, very fast on a single GPU using the mesh-free PINNs approach. Notably, we solve nonlinear PDEs with nontrivial, anisotropic, and inseparable solutions in 100,000 effective dimensions in 12 hours on a single GPU using SDGD with PINNs. Since SDGD is a general training methodology for PINNs, it can be applied to any current and future variants of PINNs to scale them up for arbitrary high-dimensional PDEs.
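As a hedged illustration of the dimension-sampling idea, the sketch below shows one way SDGD-style training could look for a Poisson-type equation $\Delta u = f$ on $[0,1]^d$: each iteration samples a small subset of dimensions $I$ and rescales their second derivatives into an unbiased Laplacian estimate, so only $|I|$ of the $d$ dimensional pieces are differentiated per step. The network architecture, the zero source term, and all sizes here are illustrative assumptions, not the authors' implementation.

```python
import jax
import jax.numpy as jnp

d = 1000   # problem dimension (hypothetical choice for this sketch)
k = 16     # number of dimensions sampled per iteration, |I| << d

# A small tanh MLP u(x; params) standing in for the PINN.
key = jax.random.PRNGKey(0)
sizes = [d, 128, 128, 1]
params = []
for m, n in zip(sizes[:-1], sizes[1:]):
    key, sub = jax.random.split(key)
    params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))

def mlp(params, x):
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze()

def second_deriv(params, x, i):
    # d^2 u / dx_i^2 at a single point x, via nested autodiff in x_i only.
    du = lambda t: jax.grad(lambda s: mlp(params, x.at[i].set(s)))(t)
    return jax.grad(du)(x[i])

def sdgd_loss(params, key):
    kx, ki = jax.random.split(key)
    x = jax.random.uniform(kx, (d,))                     # one collocation point
    idx = jax.random.choice(ki, d, (k,), replace=False)  # sampled dimensions I
    # Unbiased Laplacian estimate: rescale the sampled pieces by d / |I|.
    lap = (d / k) * sum(second_deriv(params, x, i) for i in idx)
    f_val = 0.0  # placeholder source term f(x); problem-specific in practice
    return (lap - f_val) ** 2

# One SDGD iteration: gradient of the sampled residual w.r.t. PINN parameters,
# touching only k of the d dimensional pieces per step.
grads = jax.grad(sdgd_loss)(params, jax.random.PRNGKey(1))
```

The $d/|I|$ rescaling keeps the sampled Laplacian unbiased in expectation, which is why the per-iteration cost can scale with $k$ rather than $d$ while the stochastic gradients still target the full residual.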