Message-passing graph neural networks (MPNNs) have emerged as the leading approach for machine learning on graphs, attracting significant attention in recent years. While a large body of work has explored the expressivity of MPNNs, i.e., their ability to separate graphs and approximate functions over them, comparatively less attention has been directed toward their generalization abilities, i.e., their capacity to make meaningful predictions beyond the training data. Here, we systematically review the existing literature on the generalization abilities of MPNNs. We analyze the strengths and limitations of the various studies in this area, providing insights into their methodologies and findings. Furthermore, we identify potential avenues for future research, aiming to deepen our understanding of the generalization abilities of MPNNs.