The growing scale of graph data demands efficient training of graph neural networks (GNNs). Emerging graph distillation (GD) tackles this challenge by distilling a small synthetic graph to replace the large real graph, ensuring that GNNs trained on the real and synthetic graphs exhibit comparable performance. However, existing methods rely on GNN-related information as supervision, including gradients, representations, and trajectories, an approach with two limitations. First, GNNs can affect the spectrum (i.e., eigenvalues) of the real graph, causing spectrum bias in the synthetic graph. Second, the variety of GNN architectures leads to the creation of different synthetic graphs, requiring a traversal of architectures to obtain optimal performance. To tackle these issues, we propose Graph Distillation with Eigenbasis Matching (GDEM), which aligns the eigenbasis and node features of the real and synthetic graphs. Meanwhile, it directly replicates the spectrum of the real graph, thereby preventing the influence of GNNs. Moreover, we design a discrimination constraint to balance the effectiveness and generalization of GDEM. Theoretically, the synthetic graphs distilled by GDEM are restricted spectral approximations of the real graphs. Extensive experiments demonstrate that GDEM outperforms state-of-the-art GD methods, with powerful cross-architecture generalization ability and significant distillation efficiency. Our code is available at https://github.com/liuyang-tian/GDEM.
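The core ideas in the abstract, decomposing the real graph's normalized Laplacian into an eigenbasis and spectrum, matching the synthetic eigenbasis against the real one, and transplanting the real eigenvalues directly onto the synthetic basis, can be illustrated with a minimal NumPy sketch. This is a conceptual illustration under assumed shapes and a simple Frobenius matching loss, not the authors' actual GDEM implementation; `K`, `U_syn`, and `X_syn` are hypothetical placeholders for the optimized quantities.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2}
    d = A.sum(1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Toy "real" graph: a path on 4 nodes, with random node features.
A_real = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
X_real = np.random.default_rng(0).normal(size=(4, 3))

L_real = normalized_laplacian(A_real)
eigvals, U_real = np.linalg.eigh(L_real)   # spectrum + eigenbasis (ascending)

K = 2                      # keep K eigenvectors (a hypothetical truncation)
U_k = U_real[:, :K]

# Hypothetical synthetic eigenbasis for a 2-node synthetic graph
# (orthonormalized random init; in GDEM this would be learned).
U_syn = np.linalg.qr(np.random.default_rng(1).normal(size=(2, K)))[0]
X_syn = np.random.default_rng(2).normal(size=(2, 3))

# Eigenbasis/feature matching: compare node features projected onto
# each eigenbasis (one plausible matching loss, minimized over U_syn, X_syn).
H_real = U_k.T @ X_real    # (K, 3)
H_syn = U_syn.T @ X_syn    # (K, 3)
loss = np.sum((H_real - H_syn) ** 2)

# Spectrum replication: reuse the real eigenvalues verbatim, so no GNN
# ever touches (and biases) the spectrum of the synthetic graph.
L_syn = U_syn @ np.diag(eigvals[:K]) @ U_syn.T
```

Because the synthetic Laplacian is assembled from the real eigenvalues rather than learned through any particular GNN, the distilled graph is architecture-agnostic, which is the motivation for the cross-architecture generalization claim above.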