As graphs scale to billions of nodes and edges, graph machine learning workloads are constrained by the cost of multi-hop traversals over exponentially growing neighborhoods. While various system-level and algorithmic optimizations have been proposed to accelerate Graph Neural Network (GNN) pipelines, data management and movement remain the primary bottlenecks at scale. In this paper, we explore whether graph sparsification, a well-established technique that removes edges to create sparser neighborhoods, can serve as a lightweight pre-processing step that addresses these bottlenecks while preserving accuracy on node classification tasks. We develop an extensible experimental framework that enables systematic evaluation of how different sparsification methods affect the performance and accuracy of GNN models. We conduct the first comprehensive study of GNN training and inference on sparsified graphs, revealing several key findings. First, sparsification often preserves or even improves predictive performance. For example, random sparsification raises the accuracy of the GAT model by 6.8% on the PubMed graph. Second, the benefits increase with scale, substantially accelerating both training and inference. Our results show that the K-Neighbor sparsifier improves model serving performance on the Products graph by 11.7x with only a 0.7% accuracy drop. Importantly, we find that the computational overhead of sparsification is quickly amortized, making it practical for very large graphs.
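To make the idea concrete, below is a minimal sketch of the simplest sparsifier mentioned above, random edge sparsification, which keeps each edge independently with a fixed probability. The function name, the `keep_ratio` parameter, and the edge-list representation are illustrative choices, not details taken from the paper.

```python
import random

def random_sparsify(edges, keep_ratio, seed=0):
    """Return a sparsified edge list: each edge is kept independently
    with probability `keep_ratio` (illustrative sketch, not the paper's code)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [e for e in edges if rng.random() < keep_ratio]

# Toy example: a dense graph on 100 nodes (4950 undirected edges).
edges = [(u, v) for u in range(100) for v in range(u + 1, 100)]
sparse = random_sparsify(edges, keep_ratio=0.2)
```

After sparsification, downstream GNN training and inference operate on `sparse` instead of `edges`, so multi-hop neighborhoods shrink roughly in proportion to `keep_ratio`.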