Graph neural networks (GNNs), the mainstream method for learning on graph data, are vulnerable to graph evasion attacks, where an attacker who slightly perturbs the graph structure can fool trained GNN models. Existing attacks have at least one of the following drawbacks: 1) they are limited to directly attacking two-layer GNNs; 2) they are inefficient; and 3) they are impractical, as they require full or partial knowledge of the GNN model parameters. We address these drawbacks and propose an influence-based \emph{efficient, direct, and restricted black-box} evasion attack on \emph{any-layer} GNNs. Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, defined on GNNs and label propagation (LP), respectively. We then observe that GNNs and LP are strongly connected in terms of these influences. Based on this connection, we reformulate the evasion attack on GNNs as computing label influence on LP, which is \emph{inherently} applicable to any-layer GNNs and requires no knowledge of the internal GNN model. Finally, we propose an efficient algorithm to compute label influence. Experimental results on various graph datasets show that, compared to state-of-the-art white-box attacks, our attack achieves comparable attack performance with a 5-50x speedup when attacking two-layer GNNs. Moreover, our attack is effective at attacking multi-layer GNNs\footnote{Source code and the full version are available at \url{https://github.com/ventr1c/InfAttack}}.
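To make the LP-based reformulation concrete, the following is a minimal sketch (not the paper's actual algorithm; the function name and toy graph are illustrative) of label influence under $K$-step label propagation: with row-normalized adjacency $P = D^{-1}A$, labels evolve as $Y^{(k)} = P\,Y^{(k-1)}$, so the influence of node $u$'s label on node $v$ after $K$ steps is the entry $[P^K]_{vu}$, computable without access to any GNN parameters.

```python
import numpy as np

def label_influence(adj, K=2):
    """K-step label-influence matrix I, where I[v, u] is the influence of
    node u's label on node v under label propagation (illustrative sketch)."""
    A = adj + np.eye(adj.shape[0])          # add self-loops
    P = A / A.sum(axis=1, keepdims=True)    # row-normalize: P = D^{-1} A
    return np.linalg.matrix_power(P, K)     # [P^K]_{vu} = influence of u on v

# Toy 4-node path graph 0-1-2-3.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

I = label_influence(adj, K=2)
# Each row of P^K is a probability distribution over label sources.
assert np.allclose(I.sum(axis=1), 1.0)
# Node 1 is within two hops of node 3, so its label reaches node 3;
# node 0 is three hops away, so it has zero influence at K=2.
assert I[3, 1] > 0 and I[3, 0] == 0
```

An attacker could rank candidate edge perturbations by how much they change such influence scores toward a target node, which is what makes the formulation independent of the number of GNN layers.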