We propose a scalable preconditioned primal-dual hybrid gradient algorithm for solving partial differential equations (PDEs). We multiply the PDE by a dual test function to obtain an inf-sup problem whose loss functional involves lower-order differential operators. The Primal-Dual Hybrid Gradient (PDHG) algorithm is then applied to this saddle point problem. By introducing suitable preconditioning operators into the proximal steps of the PDHG algorithm, we obtain an alternative natural gradient ascent-descent optimization scheme for updating the neural network parameters. We apply a Krylov subspace method (MINRES) to evaluate the natural gradients efficiently; this treatment handles the inversion of the preconditioning matrices using only matrix-vector multiplications. An a posteriori convergence analysis is established for the time-continuous version of the proposed method. The algorithm is tested on various types of PDEs with dimensions ranging from $1$ to $50$, including linear and nonlinear elliptic equations, reaction-diffusion equations, and Monge-Amp\`ere equations arising from $L^2$ optimal transport problems. We compare the performance of the proposed method with several commonly used deep learning algorithms for solving PDEs, such as physics-informed neural networks (PINNs), the Deep Ritz method, and weak adversarial networks (WANs), trained with the Adam and L-BFGS optimizers. The numerical results suggest that the proposed method is efficient and robust, and converges more stably than these alternatives.
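To illustrate the matrix-free natural-gradient solve described above, the following minimal Python sketch (assuming NumPy and SciPy; the function and variable names are hypothetical and not taken from the paper) solves a damped Gauss-Newton system $(J^\top J + \varepsilon I)\,d = g$ for an update direction $d$ with MINRES, accessing the preconditioning matrix only through matrix-vector products:

\begin{verbatim}
# Minimal sketch (illustration only, not the authors' implementation) of a
# matrix-free natural-gradient step. The preconditioning (Gramian) matrix
# M = J^T J + eps * I is never assembled; MINRES only needs its action on
# a vector, so each iteration costs two Jacobian-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def natural_gradient_step(jac, grad, eps=1e-6, maxiter=200):
    """Solve (J^T J + eps*I) d = grad with MINRES.

    jac  : (m, n) Jacobian of the network residuals w.r.t. parameters
    grad : (n,)   Euclidean gradient of the loss
    """
    n = grad.shape[0]

    def matvec(v):
        # Action of the preconditioning matrix: J^T (J v) + eps * v
        return jac.T @ (jac @ v) + eps * v

    M = LinearOperator((n, n), matvec=matvec, dtype=grad.dtype)
    d, info = minres(M, grad, maxiter=maxiter)
    if info != 0:
        raise RuntimeError("MINRES did not converge")
    return d  # natural-gradient direction; descent uses -d, ascent +d

# Toy usage with random data
rng = np.random.default_rng(0)
J = rng.standard_normal((100, 20))
g = rng.standard_normal(20)
d = natural_gradient_step(J, g)
\end{verbatim}

Because MINRES requires only a symmetric system and matrix-vector products, the same pattern applies when the Jacobian action is supplied by automatic differentiation rather than an explicit matrix.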