Graph neural networks (GNNs) are a widely used class of machine learning models for graph-structured data, based on local aggregation over neighbors. GNNs have close connections to logic: in particular, their expressive power is linked to that of modal logics and of bounded-variable logics with counting. In many practical scenarios, the graphs processed by GNNs carry node features that act as unique identifiers. In this work, we study how such identifiers affect the expressive power of GNNs. Inspired by the notion of order-invariant definability in finite model theory, we initiate a study of the key-invariant expressive power of GNNs: which node queries that depend only on the underlying graph structure can GNNs express on graphs with unique node identifiers? We provide answers for various classes of GNNs with local max- or sum-aggregation.
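To make the aggregation notions concrete, here is a minimal sketch (not from the paper) of a single message-passing step with sum- or max-aggregation, on a toy graph whose node features are unique identifiers; the graph, feature values, and the `layer` helper are illustrative assumptions.

```python
# Toy undirected graph as an adjacency list (illustrative example).
adj = {0: [1, 2], 1: [0], 2: [0]}

# Node features acting as unique identifiers: 1.0, 2.0, 3.0.
feats = {v: float(v + 1) for v in adj}

def layer(feats, adj, agg):
    # Simplified GNN update: each node combines its own feature with
    # an aggregate (sum or max) of its neighbors' features.
    return {v: feats[v] + agg(feats[u] for u in adj[v]) for v in adj}

sum_out = layer(feats, adj, sum)  # {0: 6.0, 1: 3.0, 2: 4.0}
max_out = layer(feats, adj, max)  # {0: 4.0, 1: 3.0, 2: 4.0}
```

On this graph, sum-aggregation assigns distinct values to nodes 0 and 2 (which have different degrees), while max-aggregation conflates them, illustrating why the two aggregation choices can yield different expressive power.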