Preconditioning techniques are crucial for efficiently solving the large-scale linear systems that arise from the discretization of partial differential equations (PDEs). Such techniques, ranging from incomplete Cholesky factorization (IC) to data-driven neural network methods, accelerate the convergence of iterative solvers such as the Conjugate Gradient (CG) method by approximating the original matrix. This paper introduces a novel approach that integrates Graph Neural Networks (GNNs) with traditional IC, addressing the shortcomings of direct GNN-based generation methods and achieving significant improvements in computational efficiency and scalability. Experimental results demonstrate an average reduction in iteration counts of 24.8% compared to IC and a two-order-of-magnitude increase in training scale compared to previous methods. The method was validated on a three-dimensional static structural analysis using the finite element method, with training on sparse matrices of up to 5 million dimensions and inference at scales of up to 10 million. Furthermore, the approach demonstrates robust generalization across scales, enabling small-scale training data on modest hardware to effectively accelerate CG solvers for large-scale linear equations. Its robustness and scalability make it a practical solution for computational science.