In this paper, we present a novel method to significantly enhance the computational efficiency of Adaptive Spatial-Temporal Graph Neural Networks (ASTGNNs) by introducing the concept of the Graph Winning Ticket (GWT), derived from the Lottery Ticket Hypothesis (LTH). By adopting a pre-determined star topology as a GWT prior to training, we balance edge reduction with efficient information propagation, reducing computational demands while maintaining high model performance. Both the time and memory complexity of generating adaptive spatial-temporal graphs are reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. Our approach streamlines ASTGNN deployment by eliminating the need for exhaustive train-prune-retrain cycles, and we demonstrate empirically across various datasets that it achieves performance comparable to full models at substantially lower computational cost. Specifically, our approach enables training ASTGNNs on the largest-scale spatial-temporal dataset using a single A6000 GPU with 48 GB of memory, overcoming the out-of-memory failure encountered with the original training procedure while even achieving state-of-the-art performance. Furthermore, we analyze the effectiveness of the GWT from the perspective of spectral graph theory, providing substantial theoretical support. This advancement not only proves the existence of efficient sub-networks within ASTGNNs but also broadens the applicability of the LTH in resource-constrained settings, marking a significant step forward in the field of graph neural networks. Code is available at https://anonymous.4open.science/r/paper-1430.
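The complexity claim above can be illustrated with a minimal sketch. A common way ASTGNNs build an adaptive adjacency matrix is a row-softmax over all pairwise products of learned node embeddings, which materializes $N^2$ entries; a star topology, by contrast, stores only the edges between a single hub and the remaining nodes. The function names, the choice of hub node, and the specific softmax formulation here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def dense_adaptive_adj(E1, E2):
    # Typical adaptive adjacency: all N^2 pairwise scores are
    # materialized, giving O(N^2) time and memory.
    scores = np.maximum(E1 @ E2.T, 0.0)                    # (N, N) ReLU scores
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)            # row-wise softmax

def star_adj(N, hub=0):
    # Hypothetical GWT-style star topology: every node connects to one
    # hub, so only O(N) edges are stored as a sparse edge list.
    edges = [(hub, j) for j in range(N) if j != hub]       # hub -> leaves
    edges += [(j, hub) for j in range(N) if j != hub]      # leaves -> hub
    return edges                                           # 2*(N-1) edges

N, d = 5, 4
rng = np.random.default_rng(0)
A = dense_adaptive_adj(rng.normal(size=(N, d)), rng.normal(size=(N, d)))
print(A.shape)           # (5, 5): N^2 entries materialized
print(len(star_adj(N)))  # 8 = 2*(N-1): linear in N
```

For N = 5 the difference is trivial, but on large sensor networks the gap between $N^2$ dense entries and $2(N-1)$ star edges is what allows the adaptive graph to fit in memory.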