Graph Neural Networks (GNNs) are gaining popularity across various domains due to their effectiveness in learning from graph-structured data. Nevertheless, they have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques, including coarsening and sparsification, which have long been employed to improve the scalability of large graph computational tasks, have recently emerged as effective methods for accelerating GNN training on large-scale graphs. However, the current development and deployment of graph reduction techniques for large graphs overlook the potential risks of data poisoning attacks against GNNs, and it remains unclear how graph reduction interacts with existing backdoor attacks. This paper conducts a thorough examination of the robustness of graph reduction methods in scalable GNN training in the presence of state-of-the-art backdoor attacks. We perform a comprehensive robustness analysis across six coarsening methods and six sparsification methods, under three GNN backdoor attacks against three GNN architectures. Our findings indicate that the effectiveness of graph reduction methods in mitigating attack success rates varies significantly, with some methods even exacerbating the attacks. Through detailed analyses of triggers and poisoned nodes, we interpret our findings and deepen our understanding of how graph reduction interacts with backdoor attacks. These results underscore the critical need to incorporate robustness considerations into graph reduction for GNN training, ensuring that gains in computational efficiency do not compromise the security of GNN systems.