In this study, we introduce a domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNNs). Our objective is to address the challenge of scaling edge-based graph neural networks as the number of nodes grows. Coupling this distributed training approach with Nystr\"om-approximation sampling techniques, we present a scalable graph neural network, referred to as DS-MPNN (D and S standing for distributed and sampled, respectively), capable of scaling up to $O(10^5)$ nodes. We validate our sampling and distributed training approach on two cases: (a) a Darcy flow dataset and (b) steady RANS simulations of 2-D airfoils, providing comparisons with both a single-GPU implementation and node-based graph convolutional networks (GCNs). DS-MPNN matches the accuracy of the single-GPU implementation, accommodates a significantly larger number of nodes than the single-GPU variant (S-MPNN), and substantially outperforms the node-based GCN.
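To make the two ingredients named above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of domain-decomposed, sampled message passing in Python: the node set is split into spatial partitions, each of which would run on its own GPU, and every node aggregates messages from a small random subset of nodes in the spirit of a Nystr\"om-style approximation rather than over the full edge set. All names and sizes here (\texttt{n\_parts}, \texttt{k\_sample}, the distance-based edge kernel) are illustrative assumptions.

\begin{verbatim}
# Hypothetical sketch of domain-decomposed, sampled message passing.
# Not the authors' code: partitioning, sampling size, and the edge
# kernel are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_parts, k_sample = 1000, 4, 8   # assumed sizes
pos = rng.random((n_nodes, 2))            # 2-D node coordinates
feat = rng.random((n_nodes, 3))           # node features

# Domain decomposition: split nodes into vertical strips of the domain;
# in the distributed setting each strip is owned by one GPU.
part_id = np.minimum((pos[:, 0] * n_parts).astype(int), n_parts - 1)

def sampled_message_pass(part):
    """One message-passing step on a single partition (one 'GPU')."""
    local = np.where(part_id == part)[0]
    out = np.empty((local.size, feat.shape[1]))
    for row, i in enumerate(local):
        # Nystrom-style sampling: draw k landmark nodes instead of
        # aggregating over every edge incident to node i.
        nbrs = rng.choice(n_nodes, size=k_sample, replace=False)
        # Distance-based edge weights (an assumed kernel).
        w = np.exp(-np.linalg.norm(pos[nbrs] - pos[i], axis=1))
        out[row] = (w[:, None] * feat[nbrs]).sum(0) / w.sum()
    return local, out

# Serial stand-in for the distributed update: in practice each
# partition runs on its own device and halo features are exchanged.
new_feat = feat.copy()
for p in range(n_parts):
    idx, msgs = sampled_message_pass(p)
    new_feat[idx] = msgs
\end{verbatim}

The key design point this sketch illustrates is that sampling bounds the per-node message cost at $k$ regardless of graph size, while domain decomposition bounds the per-device memory footprint, which together enable scaling to $O(10^5)$ nodes.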