Graph Neural Networks (GNNs) are vulnerable to adversarial attacks that degrade performance by adding small perturbations to the graph. Gradient-based attacks are among the most commonly used methods and have performed well in many attack scenarios. However, current gradient attacks suffer from two problems: they easily fall into local optima, and their perturbations are poorly concealed. Specifically, most gradient attacks use greedy strategies to generate perturbations, which tend to get trapped in local optima and weaken the attack. In addition, many attacks consider only effectiveness and ignore invisibility, so the perturbations are easily detected and the attack fails. To address these problems, this paper proposes an attack on GNNs, called AGSOA, which consists of an average gradient calculation module and a structure optimization module. In the average gradient calculation module, we average the gradient information over all time steps to guide the attack in generating perturbed edges, which stabilizes the attack's update direction and helps it escape undesirable local optima. In the structure optimization module, we compute the similarity and homophily between the target node and other nodes to adjust the graph structure, improving the attack's invisibility and transferability. Extensive experiments on three commonly used datasets show that AGSOA improves the misclassification rate by 2$\%$-8$\%$ compared to other state-of-the-art models.
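The average-gradient idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the loss gradient is replaced by a noisy stand-in, and all names (`attack_loss_grad`, `avg_grad`) are hypothetical. The point is that ranking candidate edge flips by the gradient averaged over all steps so far, rather than by the latest gradient alone, smooths the update direction.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
# Hidden "true" gradient signal; each step observes it plus noise.
base_direction = rng.normal(size=(n, n))
adj = np.zeros((n, n))  # toy adjacency matrix being perturbed


def attack_loss_grad(adj, rng):
    # Stand-in for the true gradient d(loss)/d(adj) at the current step;
    # here just the fixed signal corrupted by noise.
    return base_direction + rng.normal(scale=0.5, size=adj.shape)


grad_sum = np.zeros_like(adj)
for t in range(1, 51):
    grad_sum += attack_loss_grad(adj, rng)
    avg_grad = grad_sum / t  # average over all steps ("moments") so far

# Flip the single edge with the largest average-gradient magnitude.
i, j = np.unravel_index(np.argmax(np.abs(avg_grad)), avg_grad.shape)
adj[i, j] = 1 - adj[i, j]
```

Because the noise averages out, `avg_grad` converges toward the underlying gradient signal, so the chosen edge flip is far less sensitive to any single step's gradient than a greedy per-step choice would be.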