Graph Neural Networks (GNNs) demonstrate superior performance on a variety of graph learning tasks, yet their wider real-world adoption is hindered by the computational overhead of applying them to large-scale graphs. To address this issue, the Graph Lottery Ticket (GLT) hypothesis has been proposed, advocating the identification of subgraphs and subnetworks, \textit{i.e.}, winning tickets, without compromising performance. The effectiveness of current GLT methods largely stems from iterative magnitude pruning (IMP), which offers higher stability and better performance than one-shot pruning. However, identifying GLTs is computationally expensive, owing to the repeated pruning and retraining that IMP requires. In this paper, we reevaluate the relationship between one-shot pruning and IMP: although one-shot tickets are suboptimal compared to those found by IMP, they offer a \textit{fast track} to tickets with stronger performance. We introduce a one-shot pruning and denoising framework to validate the efficacy of this \textit{fast track}. Compared to current IMP-based GLT methods, our framework achieves a double win: graph lottery tickets with \textbf{higher sparsity}, found at \textbf{faster speeds}. Through extensive experiments across 4 backbones and 6 datasets, our method achieves a $1.32\% - 45.62\%$ improvement in weight sparsity and a $7.49\% - 22.71\%$ increase in graph sparsity, along with a $1.7\times - 44\times$ speedup over IMP-based methods and $95.3\% - 98.6\%$ MAC savings.
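To make the cost contrast concrete, below is a minimal sketch (in PyTorch) of the two pruning regimes the abstract compares on a single weight tensor. This is not the paper's framework: the denoising step is not implemented here, and `retrain` is a hypothetical stand-in for full task training, which is the dominant cost of IMP.

```python
# Minimal sketch contrasting one-shot magnitude pruning with IMP.
# Assumptions: PyTorch; `retrain` is a hypothetical placeholder for
# full task training and should keep masked weights at zero.
import torch

def magnitude_mask(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude entries of w."""
    thr = torch.quantile(w.abs().flatten(), sparsity)
    return (w.abs() > thr).float()

def one_shot_prune(w: torch.Tensor, sparsity: float, retrain):
    """One-shot: a single prune to the target sparsity, then one retrain."""
    mask = magnitude_mask(w, sparsity)
    return retrain(w * mask), mask

def imp_prune(w: torch.Tensor, sparsity: float, rounds: int, retrain):
    """IMP: alternate retraining with pruning a fixed fraction of the
    surviving weights, reaching the target sparsity geometrically."""
    per_round = 1.0 - (1.0 - sparsity) ** (1.0 / rounds)
    mask = torch.ones_like(w)
    for _ in range(rounds):
        w = retrain(w * mask)                       # costly: retrain every round
        survivors = w[mask.bool()].abs()
        thr = torch.quantile(survivors, per_round)  # prune weakest survivors
        mask = mask * (w.abs() > thr).float()
    return retrain(w * mask), mask

# Toy usage: identity `retrain` only illustrates the call counts.
if __name__ == "__main__":
    w = torch.randn(256, 256)
    retrain = lambda w: w
    _, m1 = one_shot_prune(w, sparsity=0.9, retrain=retrain)        # 1 retrain
    _, m2 = imp_prune(w, sparsity=0.9, rounds=10, retrain=retrain)  # 11 retrains
    print(m1.mean().item(), m2.mean().item())  # both leave roughly 10% density
```

The asymmetry is visible in the retrain counts: one-shot pruning trains once, while IMP trains once per round plus a final pass, which is the source of the speedup the abstract quantifies. The same masking logic applies to the graph side, where a binary mask over adjacency entries plays the role of the weight mask.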