Understanding what graph neural networks can learn, and in particular whether they can learn to execute algorithms, remains a central theoretical challenge. In this work, we prove exact learnability results for graph algorithms under bounded-degree and finite-precision constraints. Our approach follows a two-step process. First, we train an ensemble of multi-layer perceptrons (MLPs) to execute the local instructions of a single node. Second, during inference, we use the trained MLP ensemble as the update function within a graph neural network (GNN). Leveraging Neural Tangent Kernel (NTK) theory, we show that the local instructions can be learned from a small training set, so that, with high probability, the complete graph algorithm is executed without error at inference time. To illustrate the learning power of our setting, we establish a rigorous learnability result for the LOCAL model of distributed computation. We further demonstrate positive learnability results for widely studied algorithms such as message flooding, breadth-first and depth-first search, and Bellman-Ford.
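To make the two-step pipeline concrete, the following is a minimal sketch of the inference stage under simplifying assumptions: a single trained one-hidden-layer MLP stands in for the trained ensemble, and neighbor lists are zero-padded to the degree bound so the MLP input has fixed size. The names `mlp_update` and `gnn_execute`, the padding scheme, and the parameter layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mlp_update(own_state, neighbor_states, params):
    """Hypothetical trained MLP: maps a node's state plus its (padded,
    degree-bounded) neighbor states to the node's next state."""
    W1, b1, W2, b2 = params  # one hidden layer with ReLU
    x = np.concatenate([own_state] + neighbor_states)
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def gnn_execute(adj, states, params, num_rounds, max_degree, state_dim):
    """Synchronous message passing: in each round, every node applies the
    trained MLP to its own state and its neighbors' states. This is the
    inference step in which the MLP serves as the GNN update function."""
    for _ in range(num_rounds):
        new_states = []
        for v, nbrs in enumerate(adj):
            # Pad the neighbor list to the degree bound so the MLP
            # input dimension is fixed (one way to exploit the
            # bounded-degree constraint).
            padded = [states[u] for u in nbrs]
            padded += [np.zeros(state_dim)] * (max_degree - len(nbrs))
            new_states.append(mlp_update(states[v], padded, params))
        states = new_states
    return states
```

In this reading, the bounded-degree assumption is what lets a fixed-input-size MLP represent the local instruction of any node, and the number of message-passing rounds plays the role of the algorithm's round complexity.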