Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up the solution of linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyperparameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the network's output as a preconditioner, we propose an output activation function that guarantees the predicted factorization is invertible. Further, the graph neural network architecture allows us to ensure that the output itself is sparse, which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions for training the learned preconditioners, and show their effectiveness in reducing the number of GMRES iterations and improving the spectral properties of the system on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
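To illustrate the invertibility guarantee mentioned above: a triangular factor is nonsingular exactly when every diagonal entry is nonzero, so mapping the network's raw diagonal predictions through a strictly positive function is sufficient. The sketch below uses a softplus activation for this purpose; this is an illustrative assumption, not necessarily the exact activation used in the paper, and the "raw network output" is a random stand-in.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)) > 0 for all x.
    return np.logaddexp(0.0, x)

def invertible_lower(raw):
    """Map a raw (dense, for simplicity) network output to a lower-triangular
    factor with a strictly positive diagonal, so the factor is always
    nonsingular: for triangular L, det(L) = prod(diag(L)) > 0."""
    L = np.tril(raw)
    np.fill_diagonal(L, softplus(np.diag(raw)))
    return L

rng = np.random.default_rng(0)
raw = rng.normal(size=(5, 5))   # stand-in for the GNN's raw prediction
raw[2, 2] = 0.0                 # a zero diagonal prediction would otherwise break invertibility
L = invertible_lower(raw)

assert np.all(np.diag(L) > 0)   # guaranteed by the activation
# The preconditioner solve L y = b is therefore always well defined:
y = np.linalg.solve(L, rng.normal(size=5))
```

In practice the same construction applies to both the L and U factors of the predicted incomplete factorization, and the triangular solves are performed on the sparse factors rather than dense arrays.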