Graph Neural Networks (GNNs) have demonstrated remarkable performance on a wide range of tasks, such as node classification, link prediction, and graph classification, by exploiting the structural information in graph-structured data. However, in node classification, computing node-level explanations becomes prohibitively time-consuming as the size of the graph increases, while batching strategies often degrade explanation quality. This paper introduces a novel approach to parallelizing node-level explainability in GNNs through graph partitioning. By decomposing the graph into disjoint subgraphs, we enable parallel computation of explanations over node neighborhoods, significantly improving scalability and efficiency without affecting the correctness of the results, provided sufficient memory is available. For scenarios where memory is limited, we further propose a dropout-based reconstruction mechanism that offers a controllable trade-off between memory usage and explanation fidelity. Experimental results on real-world datasets demonstrate substantial speedups, enabling scalable and transparent explainability for large-scale GNN models.
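The partitioning idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the helper names (`partition_with_halo`, `explain_block`) and the toy "explanation" are assumptions. The key point it shows is that each disjoint block is padded with its k-hop halo, so a k-layer GNN's receptive field is preserved and per-block explanations can run in parallel without changing results.

```python
# Hypothetical sketch of partition-then-explain; names and the toy
# explainer are illustrative, not from the paper.
from concurrent.futures import ThreadPoolExecutor

def k_hop_neighbors(adj, seeds, k):
    """All nodes within k hops of any seed node (seeds included)."""
    frontier, seen = set(seeds), set(seeds)
    for _ in range(k):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

def partition_with_halo(adj, num_parts, k):
    """Split nodes into disjoint blocks; pad each block with its k-hop
    halo so a k-layer GNN's receptive field stays intact per block."""
    nodes = sorted(adj)
    blocks = [nodes[i::num_parts] for i in range(num_parts)]
    return [(set(b), k_hop_neighbors(adj, b, k)) for b in blocks]

def explain_block(adj, block, halo):
    # Stand-in for a real per-node explainer (e.g., a GNNExplainer run);
    # here the "explanation" is just the halo-restricted neighborhood.
    return {v: sorted(set(adj[v]) & halo) for v in block}

# Toy path graph 0-1-2-3-4 as an adjacency dict.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
parts = partition_with_halo(adj, num_parts=2, k=2)
with ThreadPoolExecutor() as pool:  # each block is explained independently
    results = list(pool.map(lambda ph: explain_block(adj, *ph), parts))
explanations = {v: e for r in results for v, e in r.items()}
print(explanations[2])  # → [1, 3]
```

In a real system the round-robin split would be replaced by an edge-cut minimizer such as METIS, and the halo is what makes the parallel results exact when memory suffices; the paper's dropout-based reconstruction would act on these halos when they no longer fit in memory.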