The expressive power of message-passing graph neural networks (MPNNs) is reasonably well understood, primarily through combinatorial techniques from graph isomorphism testing. However, MPNNs' generalization abilities -- making meaningful predictions beyond the training set -- remain less explored. Current generalization analyses often overlook graph structure, restrict their focus to specific aggregation functions, and assume the impractical, hard-to-optimize $0$-$1$ loss function. Here, we extend recent advances in graph similarity theory to assess the influence of graph structure, aggregation, and loss functions on MPNNs' generalization abilities. Our empirical study supports our theoretical insights, improving our understanding of MPNNs' generalization properties.
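To make the aggregation-function distinction concrete, the following is a minimal sketch of one message-passing layer. It is a hypothetical illustration, not the paper's model: `mpnn_layer`, the additive node update, and the toy graph are all assumptions. The point is that swapping the aggregation (sum vs. mean) changes the layer's output, which is one of the factors the abstract identifies as affecting generalization.

```python
# Minimal, hypothetical sketch of one MPNN message-passing layer.
# Each node's feature is updated by aggregating its neighbors' features;
# the choice of aggregation (sum, mean, max, ...) is left as a parameter.

def mpnn_layer(features, edges, aggregate):
    """features: dict node -> float; edges: undirected (u, v) pairs."""
    neighbors = {v: [] for v in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for v in features:
        msgs = [features[u] for u in neighbors[v]]
        agg = aggregate(msgs) if msgs else 0.0
        updated[v] = features[v] + agg  # simple additive update (an assumption)
    return updated

# Toy path graph 0 - 1 - 2: the two aggregations already disagree at node 1.
feats = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 1), (1, 2)]
sum_out = mpnn_layer(feats, edges, sum)                              # node 1: 2 + (1 + 3) = 6.0
mean_out = mpnn_layer(feats, edges, lambda xs: sum(xs) / len(xs))    # node 1: 2 + 2 = 4.0
```

The two aggregations produce identical outputs on degree-1 nodes here but diverge at node 1, illustrating why generalization analyses that fix a single aggregation function do not automatically transfer to others.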