Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle input graphs of varying size and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important open question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as the more expressive two-variable fragments. To establish these results, we apply methods from the finite model theory of first-order and modal logic to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.
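The invariance property mentioned above can be illustrated with a minimal message-passing layer. The sketch below is a generic mean-aggregation GNN layer (a simplification for illustration, not one of the architectures proposed in this paper): relabeling the nodes by a permutation matrix `P` permutes the output node features in exactly the same way, so isomorphic graphs receive identical representations up to node ordering.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_nbr):
    """One mean-aggregation message-passing layer.

    A      : (n, n) adjacency matrix
    X      : (n, d) node feature matrix
    W_self : (d, k) weight for the node's own features
    W_nbr  : (d, k) weight for the aggregated neighbor features
    """
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero for isolated nodes
    M = (A @ X) / deg                        # mean over neighbor features
    return np.maximum(X @ W_self + M @ W_nbr, 0.0)  # ReLU nonlinearity

# Permutation equivariance check: relabeling nodes by P permutes the outputs by P.
rng = np.random.default_rng(0)
n, d, k = 5, 3, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric, no self-loops
X = rng.normal(size=(n, d))
W_self = rng.normal(size=(d, k))
W_nbr = rng.normal(size=(d, k))

P = np.eye(n)[rng.permutation(n)]           # random permutation matrix
lhs = gnn_layer(P @ A @ P.T, P @ X, W_self, W_nbr)
rhs = P @ gnn_layer(A, X, W_self, W_nbr)
assert np.allclose(lhs, rhs)
```

Because every operation (neighbor aggregation, shared linear maps, pointwise nonlinearity) commutes with node relabeling, the layer is permutation-equivariant; stacking such layers and ending with a permutation-invariant readout (e.g. a sum over nodes) yields graph-level outputs that are invariant under isomorphism.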