Graph neural networks (GNNs) have gained popularity for a variety of graph-related tasks. However, like deep neural networks, GNNs are vulnerable to adversarial attacks. Empirical studies have shown that adversarially robust generalization plays a pivotal role in establishing effective defense algorithms against adversarial attacks. In this paper, we provide adversarially robust generalization bounds for two popular kinds of GNNs, the graph convolutional network (GCN) and the message-passing graph neural network, using the PAC-Bayesian framework. Our results reveal that the spectral norm of the graph diffusion matrix, the spectral norms of the weights, and the perturbation factor govern the robust generalization bounds of both models. Our bounds are nontrivial generalizations of the results developed in (Liao et al., 2020) from the standard setting to the adversarial setting, while avoiding exponential dependence on the maximum node degree. As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve the bounds in (Liao et al., 2020) by avoiding exponential dependence on the maximum node degree.