Large-scale numerical simulations often entail daunting computational costs. High-Performance Computing has accelerated the process, but adapting legacy codes to exploit parallel GPU computations remains challenging. Meanwhile, Machine Learning models can harness GPU computations effectively but often struggle with generalization and accuracy. Graph Neural Networks (GNNs), in particular, are well suited to learning from unstructured data such as meshes but are often limited to small-scale problems. Moreover, the capabilities of the trained model usually restrict the accuracy of the data-driven solution. To benefit from both worlds, this paper introduces a novel preconditioner that integrates a GNN model within a multi-level Domain Decomposition framework. The proposed GNN-based preconditioner is used to enhance the efficiency of a Krylov method, resulting in a hybrid solver that can converge to any desired level of accuracy. The efficiency of the Krylov method greatly benefits from the GNN preconditioner, which adapts to meshes of any size and shape, executes on GPUs, and features a multi-level approach to ensure the scalability of the entire process. Several experiments are conducted to validate the numerical behavior of the hybrid solver, and an in-depth analysis of its performance is provided to assess its competitiveness against a C++ legacy solver.