We present the first adversarial robustness study of Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs). In particular, we use adversarial robustness as a tool to uncover a significant gap between their theoretically possible and empirically achieved expressive power. To do so, we focus on the ability of GNNs to count specific subgraph patterns, an established measure of expressivity, and extend the concept of adversarial robustness to this task. Based on this, we develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even under small perturbations of the graph's structure. Building on this, we show that such architectures also fail to count substructures on out-of-distribution graphs.
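To make the idea of an adversarial attack on subgraph counting concrete, the following is a minimal toy sketch, not the paper's actual method: `predict` is a hypothetical stand-in for a trained GNN's subgraph-count prediction, and the attack greedily flips the single edge that most increases the model's counting error, under a fixed perturbation budget.

```python
from itertools import combinations

def triangle_count(edges, n):
    # Exact triangle count via brute-force enumeration (fine for toy graphs).
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    return sum(1 for a, b, c in combinations(range(n), 3)
               if adj[a][b] and adj[b][c] and adj[a][c])

def greedy_structure_attack(edges, n, predict, budget):
    # `predict(edges, n)` is a hypothetical placeholder for a GNN's
    # predicted triangle count. Greedily flip whichever single edge most
    # increases the counting error |predict(G') - count(G')|, up to
    # `budget` flips.
    current = {frozenset(e) for e in edges}
    for _ in range(budget):
        best = None
        best_err = abs(predict(current, n) - triangle_count(current, n))
        for u, v in combinations(range(n), 2):
            e = frozenset((u, v))
            cand = current ^ {e}  # flip: add edge if absent, remove if present
            err = abs(predict(cand, n) - triangle_count(cand, n))
            if err > best_err:
                best, best_err = cand, err
        if best is None:  # no single flip increases the error
            break
        current = best
    return current
```

The sketch uses triangles as the target pattern; the same structure applies to any fixed subgraph. A real attack would replace the exhaustive single-flip search with gradient-based or sampled candidate selection to remain tractable on larger graphs.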