We present a new angle on the expressive power of graph neural networks (GNNs) by studying how the predictions of real-valued GNN classifiers, such as those classifying graphs probabilistically, evolve as we apply them to larger and larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express. This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models, with aggregation functions including mean and the attention-based mechanism of graph transformers. Our results apply to a broad class of random graph models, including the sparse and dense variants of the Erd\H{o}s-R\'enyi model, the stochastic block model, and the Barab\'asi-Albert model. We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
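To make the convergence claim concrete, here is a minimal sketch of how one might observe it; this is not the paper's experimental setup. It applies a two-layer mean-aggregation GNN with fixed random weights to Erd\H{o}s-R\'enyi graphs of growing size. The architecture, the feature distribution, and the edge probability p = 0.1 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random parameters of a two-layer GNN with mean aggregation
# (hypothetical weights; any fixed weights exhibit the same behaviour).
d = 8
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
w_out = rng.normal(size=d)
mu = rng.normal(size=d)  # common mean of the i.i.d. node features

def classify(A, X):
    """Two rounds of mean aggregation, then a mean readout and a sigmoid,
    giving a graph-level 'probability' in (0, 1)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H = np.tanh((A @ X / deg) @ W1)   # layer 1: average neighbour features
    H = np.tanh((A @ H / deg) @ W2)   # layer 2
    return 1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w_out)))

for n in [50, 200, 800, 3200]:
    # Dense Erdos-Renyi graph G(n, p) with constant p = 0.1 (assumed value).
    upper = np.triu(rng.random((n, n)) < 0.1, 1).astype(float)
    A = upper + upper.T
    X = mu + rng.normal(size=(n, d))  # i.i.d. node features with mean mu
    print(f"n = {n:5d}  output = {classify(A, X):.4f}")
```

The intuition the sketch illustrates: as n grows, each node's degree grows, so the neighbourhood average of i.i.d. features concentrates around their mean; every layer's output then stabilises, and the graph-level prediction printed in the loop settles toward a single constant value.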