Graph condensation has recently emerged as a prevalent technique for improving the training efficiency of graph neural networks (GNNs). It condenses a large graph into a small synthetic one such that a GNN trained on the small graph achieves performance comparable to a GNN trained on the large graph. However, existing graph condensation studies focus mainly on the trade-off between graph size and GNN performance (model utility), while overlooking the security issues of graph condensation. To bridge this gap, we take the first step in exploring backdoor attacks against GNNs trained on condensed graphs. We introduce an effective backdoor attack against graph condensation, termed BGC. The attack aims to (1) preserve the quality of the condensed graph despite trigger injection, and (2) keep the trigger effective throughout the condensation process, achieving a high attack success rate. Specifically, BGC continually updates the triggers during condensation and poisons representative nodes. Extensive experiments demonstrate the effectiveness of our attack: BGC achieves a high attack success rate (close to 1.0) and good model utility in all cases. Moreover, results against multiple defense methods show that BGC remains effective under these defenses. Finally, we analyze the key hyperparameters that influence attack performance. Our code is available at: https://github.com/JiahaoWuGit/BGC.
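To make the two ingredients mentioned above concrete, the following is a minimal, self-contained sketch of the general pattern: select representative nodes to poison, inject a feature-space trigger with a flipped target label, and alternate a condensation step with a trigger update so the trigger stays effective as the condensed graph evolves. All names, the degree-based node selection, the centroid "condensation", and the gradient-free trigger update are illustrative assumptions, not BGC's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 20 nodes, 5 features, 2 classes (illustrative stand-ins,
# not the datasets or models used in the paper).
n, d, target_class = 20, 5, 1
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)  # symmetric (undirected) adjacency

# Select "representative" nodes to poison -- here, the highest-degree
# nodes (a simple proxy; BGC defines its own selection criterion).
degrees = A.sum(axis=1)
poison_idx = np.argsort(degrees)[-3:]

trigger = rng.normal(scale=0.1, size=d)  # feature-space trigger pattern

# Alternate condensation with trigger updates, mimicking the idea of
# keeping the trigger effective while the condensed graph is optimized.
for step in range(10):
    X_pois = X.copy()
    X_pois[poison_idx] += trigger          # inject the trigger
    y_pois = y.copy()
    y_pois[poison_idx] = target_class      # relabel to the target class

    # "Condensation": one synthetic node per class (class-wise centroid).
    centroids = np.stack([X_pois[y_pois == c].mean(axis=0) for c in (0, 1)])

    # Trigger update: nudge the trigger so poisoned features move toward
    # the target-class centroid (a gradient-free stand-in for a learned
    # trigger update).
    gap = centroids[target_class] - X_pois[poison_idx].mean(axis=0)
    trigger += 0.1 * gap

print(y_pois[poison_idx])  # every poisoned node carries the target label
```

A real attack would replace the centroid step with an actual condensation objective (e.g., gradient matching) and update the trigger against the surrogate GNN's loss; the sketch only shows how poisoning and condensation interleave.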