Graph Neural Networks (GNNs) are gaining popularity across various domains due to their effectiveness in learning from graph-structured data. Nevertheless, they have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques, including coarsening and sparsification, which have long been employed to improve the scalability of large graph computational tasks, have recently emerged as effective methods for accelerating GNN training on large-scale graphs. However, the current development and deployment of graph reduction techniques for large graphs overlook the potential risks of data poisoning attacks against GNNs, and it is not yet clear how graph reduction interacts with existing backdoor attacks. This paper conducts a thorough examination of the robustness of graph reduction methods in scalable GNN training in the presence of state-of-the-art backdoor attacks. We perform a comprehensive robustness analysis across six coarsening methods and six sparsification methods for graph reduction, under three GNN backdoor attacks against three GNN architectures. Our findings indicate that the effectiveness of graph reduction methods in mitigating attack success rates varies significantly, with some methods even exacerbating the attacks. Through detailed analyses of triggers and poisoned nodes, we interpret our findings and deepen our understanding of how graph reduction influences robustness against backdoor attacks. These results highlight the critical need to incorporate robustness considerations into graph reduction for GNN training, ensuring that gains in computational efficiency do not compromise the security of GNN systems.