Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in modeling graph-structured data, yet recent research reveals their susceptibility to adversarial attacks. Traditional attack methodologies, which rely on manipulating the original graph or adding links to artificially created nodes, often prove impractical in real-world settings. This paper introduces a novel adversarial scenario involving the injection of an isolated subgraph to deceive both the link recommender and the node classifier within a GNN system. Specifically, the link recommender is misled into proposing links between targeted victim nodes and the injected subgraph, encouraging users to unwittingly establish connections that degrade node classification accuracy, thereby facilitating a successful attack. To address this, we present the LiSA framework, which employs a dual surrogate model and bi-level optimization to simultaneously satisfy the two adversarial objectives. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
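The dual-objective attack described above can be sketched in a toy form. Everything below is a hypothetical illustration, not LiSA's actual formulation: frozen linear models stand in for the dual surrogates (link recommender and node classifier), and a single shared feature vector for the injected isolated subgraph is optimized by gradient descent against both objectives at once, i.e. maximize the victim-to-subgraph link score while maximizing the victim's classification loss after the recommended links are accepted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, hypothetical setup -- all shapes, names, and surrogates are
# illustrative assumptions, not the LiSA paper's actual models.
n, d, c = 20, 8, 3                 # original nodes, feature dim, classes
X = rng.normal(size=(n, d))        # node features
W_cls = rng.normal(size=(d, c))    # linear surrogate node classifier
w_link = rng.normal(size=d)        # linear surrogate link recommender

victim, y_victim = 0, 1            # victim node and its true class
k = 3                              # size of the injected isolated subgraph
x_inj = rng.normal(size=d)         # shared feature vector of injected nodes

def attack_loss(x):
    # Objective 1: mislead the recommender into proposing
    # victim -> subgraph links (maximize a dot-product link score).
    link_score = (X[victim] * w_link) @ x
    # Objective 2: once the links are accepted, 1-hop mean aggregation
    # mixes x into the victim's representation; maximize the victim's
    # cross-entropy loss under the surrogate classifier.
    h = (X[victim] + k * x) / (k + 1)
    logits = h @ W_cls
    m = logits.max()                               # stable log-sum-exp
    ce = m + np.log(np.exp(logits - m).sum()) - logits[y_victim]
    return -(link_score + ce)      # minimize the negated objectives

def num_grad(f, x, eps=1e-5):
    # central finite differences keep the sketch dependency-free
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

loss_before = attack_loss(x_inj)
for _ in range(100):               # plain gradient descent on x_inj
    x_inj -= 0.1 * num_grad(attack_loss, x_inj)
loss_after = attack_loss(x_inj)
```

Note that this sketch covers only the outer level of the bi-level problem: the surrogates here are fixed, whereas the full framework would also fit the surrogate models (the inner level) rather than assume them.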