The expressive power of Graph Neural Networks (GNNs) is often analysed via correspondences to the Weisfeiler-Leman (WL) algorithm and to fragments of first-order logic. Standard GNNs are limited to aggregating over immediate neighbourhoods or over global read-outs. To increase their expressivity, recent work has incorporated substructural information (e.g. cycle counts and subgraph properties). In this paper, we formalize this architectural trend by introducing Template GNNs (T-GNNs), a generalized framework in which node features are updated by aggregating over valid template embeddings drawn from a specified set of graph templates. We propose a corresponding logic, graded template modal logic (GML(T)), together with generalized notions of template-based bisimulation and of the WL algorithm. We establish an equivalence between the expressive power of T-GNNs and GML(T), and provide a unifying approach for analysing GNN expressivity: we show how standard AC-GNNs and their recent variants can be interpreted as instantiations of T-GNNs.
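To make the aggregation mechanism concrete, the following is a minimal, hypothetical sketch of a T-GNN-style layer in Python. All names are illustrative assumptions, not the paper's construction: we fix a single template (the triangle) and update each node's feature by summing the features of nodes that co-occur with it in a valid embedding of that template.

```python
# Hypothetical sketch of a T-GNN-style update with a triangle template.
# Assumed names (triangle_embeddings_at, t_gnn_layer) are illustrative only.
from itertools import combinations

def triangle_embeddings_at(v, adj):
    """All triangle embeddings containing node v, as pairs of the other two nodes."""
    return [(u, w) for u, w in combinations(sorted(adj[v]), 2) if w in adj[u]]

def t_gnn_layer(features, adj):
    """One layer: combine each node's feature with an aggregate over its template embeddings."""
    new = {}
    for v in adj:
        agg = sum(features[u] + features[w] for u, w in triangle_embeddings_at(v, adj))
        new[v] = features[v] + agg  # combine self feature with the template aggregate
    return new

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
feats = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
out = t_gnn_layer(feats, adj)
# Node 3 lies in no triangle, so its feature is unchanged, while nodes 0-2 are updated;
# plain neighbourhood aggregation with uniform features could not make this distinction.
```

A standard AC-GNN is recovered by taking the single-edge graph as the only template, so that the aggregation ranges exactly over immediate neighbours.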