Graph neural networks (GNNs) have become the \textit{de facto} standard for representation learning on graphs and have achieved state-of-the-art performance in many graph-related tasks; however, it has been shown that the expressive power of standard GNNs is at most equivalent to the 1-dimensional Weisfeiler-Lehman (1-WL) test. Recently, several lines of work have aimed to enhance the expressive power of graph neural networks. One such line develops $K$-hop message-passing GNNs, in which a node's representation is updated by aggregating information not only from its direct neighbors but from all neighbors within $K$ hops. Another line leverages subgraph information to enhance expressive power and is provably strictly more powerful than the 1-WL test. In this work, we discuss the limitations of $K$-hop message-passing GNNs and propose a \textit{substructure encoding function} to uplift the expressive power of any $K$-hop message-passing GNN. We further inject contextualized substructure information to enhance the expressiveness of $K$-hop message-passing GNNs. Our method is provably more powerful than previous $K$-hop graph neural networks and than 1-WL subgraph GNNs, a specific type of subgraph-based GNN model, and is no less powerful than 3-WL. Empirically, our proposed method sets new state-of-the-art performance or achieves comparable performance on a variety of datasets. Our code is available at \url{https://github.com/tianyao-aka/Expresive_K_hop_GNNs}.
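The $K$-hop message-passing scheme mentioned above can be written generically as follows. This is a sketch, not the paper's exact formulation: the symbols $h_v^{(l)}$ (node representation at layer $l$), $\mathcal{N}_k(v)$ (the set of nodes exactly $k$ hops from $v$), and the $\mathrm{AGG}$/$\mathrm{COMBINE}$ functions are illustrative notation introduced here.

```latex
% One K-hop message-passing layer (generic sketch; notation is illustrative).
% For each hop distance k, aggregate over the multiset of neighbor states:
m_v^{(l,k)} = \mathrm{AGG}^{(l)}\bigl(\{\!\!\{\, h_u^{(l-1)} : u \in \mathcal{N}_k(v) \,\}\!\!\}\bigr),
\qquad k = 1, \dots, K
% Then combine the node's previous state with all K hop-wise messages:
h_v^{(l)} = \mathrm{COMBINE}^{(l)}\bigl(h_v^{(l-1)},\, m_v^{(l,1)}, \dots, m_v^{(l,K)}\bigr)
```

With $K = 1$ this reduces to standard message passing; larger $K$ lets a single layer see a wider neighborhood, which is the setting whose limitations the abstract discusses.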