Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in modeling graph-structured data, yet recent research reveals their susceptibility to adversarial attacks. Traditional attack methodologies, which rely on manipulating the original graph or adding links to artificially created nodes, often prove impractical in real-world settings. This paper introduces a novel adversarial scenario in which an isolated subgraph is injected to deceive both the link recommender and the node classifier within a GNN system. Specifically, the link recommender is misled into proposing links between targeted victim nodes and the injected subgraph, encouraging users to unintentionally establish connections that degrade node classification accuracy, thereby facilitating a successful attack. To address this, we present the LiSA framework, which employs a dual surrogate model and bi-level optimization to satisfy both adversarial objectives simultaneously. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
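The dual-objective idea can be illustrated with a toy sketch. This is not the actual LiSA method: the surrogate link recommender and node classifier are reduced to fixed linear maps, the injected subgraph is represented only by a feature matrix, and all names, shapes, and the averaging-based "post-link" victim representation are illustrative assumptions. The sketch only shows the core pattern of jointly ascending a combined adversarial objective (raise the link score to victims, lower the victim's classification margin).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (all shapes and models are illustrative assumptions,
# not the actual LiSA architecture).
d = 8                                 # feature dimension
n_inject = 3                          # nodes in the injected isolated subgraph
victim = rng.normal(size=d)           # feature vector of one victim node

# Dual surrogates: a link scorer and a node classifier, here frozen linear maps.
W_link = rng.normal(size=(d, d))      # surrogate link-recommender weights
w_cls = rng.normal(size=d)            # surrogate binary-classifier weights
y_true = 1.0                          # victim's true label in {-1, +1}

X_sub = rng.normal(size=(n_inject, d)) * 0.1  # injected subgraph features

def link_score(X):
    # Mean bilinear compatibility between the victim and injected nodes.
    return float(np.mean(X @ W_link @ victim))

def cls_margin(X):
    # If the recommended links are formed, the victim's representation is
    # averaged with its new neighbours; margin > 0 means correct prediction.
    h = (victim + X.mean(axis=0)) / 2.0
    return float(y_true * (w_cls @ h))

lam, lr = 1.0, 0.05
s0, m0 = link_score(X_sub), cls_margin(X_sub)

for _ in range(200):
    # Per-row gradients of the combined adversarial objective
    # J(X) = link_score(X) - lam * cls_margin(X).
    g_link = (W_link @ victim)[None, :] / n_inject        # raises link score
    g_cls = -y_true * w_cls[None, :] / (2.0 * n_inject)   # lowers the margin
    X_sub += lr * (g_link + lam * g_cls)                  # joint ascent step

# The combined adversarial objective strictly improves under gradient ascent.
print(link_score(X_sub) - lam * cls_margin(X_sub) > s0 - lam * m0)
```

In the paper's full setting this outer feature optimization is nested inside a bi-level loop that also retrains the surrogate models, and the subgraph's internal edges are optimized as well; the sketch collapses both to keep the objective structure visible.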