Graph Neural Networks (GNNs) are vital in data science but are increasingly susceptible to adversarial attacks. To help researchers develop more robust GNN models, it is essential to design strong attack models as foundational benchmarks and guiding references. Among adversarial attacks, gray-box poisoning attacks are noteworthy for their effectiveness and few constraints. These attacks exploit the fact that GNNs must be retrained on updated data, degrading their performance by perturbing the training datasets. However, current research overlooks the real-world scenario of incomplete graphs. To address this gap, we introduce the Robust Incomplete Deep Attack framework (RIDA), the first algorithm for robust gray-box poisoning attacks on incomplete graphs. RIDA aggregates information from distant vertices and fully exploits the available data. Extensive experiments against 9 state-of-the-art (SOTA) baselines on 3 real-world datasets demonstrate RIDA's superior robustness to incompleteness and high attack performance on incomplete graphs.