In recent years, there has been widespread adoption of machine learning-based approaches to automate the solving of partial differential equations (PDEs). Among these approaches, Gaussian processes (GPs) and kernel methods have garnered considerable interest due to their flexibility, robust theoretical guarantees, and close ties to traditional methods. They can transform the solving of general nonlinear PDEs into solving quadratic optimization problems with nonlinear, PDE-induced constraints. However, the complexity bottleneck lies in computing with dense kernel matrices obtained from pointwise evaluations of the covariance kernel and its \textit{partial derivatives}; the derivative entries arise from the PDE constraints, and fast algorithms for such matrices are scarce. The primary goal of this paper is to provide a near-linear complexity algorithm for working with such kernel matrices. We present a sparse Cholesky factorization algorithm for these matrices based on the near-sparsity of the Cholesky factor under a novel ordering of the pointwise and derivative measurements. The near-sparsity is rigorously justified by directly connecting the factor to GP regression and to the exponential decay of basis functions in numerical homogenization. We then employ the Vecchia approximation of GPs, which is optimal in the Kullback-Leibler divergence, to compute the approximate factor. This enables us to compute $\epsilon$-approximate inverse Cholesky factors of the kernel matrices with complexity $O(N\log^d(N/\epsilon))$ in space and $O(N\log^{2d}(N/\epsilon))$ in time. We integrate the sparse Cholesky factorization into optimization algorithms to obtain fast solvers of the nonlinear PDEs. We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs, such as the nonlinear elliptic, Burgers, and Monge-Amp\`ere equations.
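The Vecchia-type construction of a sparse inverse Cholesky factor can be sketched as follows; this is a minimal illustration, not the paper's algorithm. It uses pointwise measurements only (no derivative entries), an exponential kernel, and nearest-earlier-neighbor conditioning as a simple stand-in for the maximin ordering and sparsity pattern described above; all function names are our own. Each column of the factor is obtained KL-optimally from a small dense solve on its sparsity set.

```python
import numpy as np

def expo_kernel(A, B, ell=0.5):
    # Matern-1/2 (exponential) covariance kernel on point clouds A, B
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return np.exp(-d / ell)

def vecchia_inv_chol(X, kernel, k):
    """Sparse approximate inverse Cholesky factor U with K(X,X)^{-1} ~ U @ U.T.
    Column i conditions on the k nearest previously ordered points, a
    simplified stand-in for the ordering of pointwise/derivative measurements.
    Each column is the KL-optimal one supported on its sparsity set."""
    N = len(X)
    U = np.zeros((N, N))
    for i in range(N):
        d = np.linalg.norm(X[:i] - X[i], axis=1)
        nbrs = np.argsort(d)[:k]                  # nearest earlier points
        s = np.append(nbrs, i).astype(int)        # sparsity set, i last
        Ks = kernel(X[s], X[s]) + 1e-12 * np.eye(len(s))  # tiny jitter
        e = np.zeros(len(s)); e[-1] = 1.0
        v = np.linalg.solve(Ks, e)                # KL-optimal up to scaling
        U[s, i] = v / np.sqrt(v[-1])
    return U

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 2))        # measurement locations in [0,1]^2
K = expo_kernel(X, X)
U = vecchia_inv_chol(X, expo_kernel, k=8)
# U is upper triangular with at most k+1 nonzeros per column,
# and U @ U.T approximates K^{-1}; growing k tightens the approximation.
```

With full conditioning (`k = N`) the product of exact conditionals reproduces the exact joint density, so `U @ U.T` recovers `K^{-1}` exactly; with `k` fixed, the loop costs $O(Nk^3)$, and the near-linear complexity quoted above additionally relies on the paper's ordering and aggregated sparsity pattern.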