Existing studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. Even though Graph Transformers (GTs) have surpassed Message-Passing GNNs on several benchmarks, their adversarial robustness remains unexplored. Attacking GTs is challenging, however, due to their Positional Encodings (PEs) and special attention mechanisms, which can be difficult to differentiate. We overcome these challenges by targeting three representative architectures based on (1) random-walk PEs, (2) pairwise-shortest-path PEs, and (3) spectral PEs, and we propose the first adaptive attacks for GTs. We leverage our attacks to evaluate robustness to (a) structure perturbations on node classification and (b) node injection attacks for (fake-news) graph classification. Our evaluation reveals that GTs can be catastrophically fragile, underlining the importance of our work and the necessity of adaptive attacks.