Convolutional neural networks have been successfully extended to operate on graphs, giving rise to Graph Neural Networks (GNNs). GNNs aggregate information from adjacent nodes through successive applications of graph convolutions. GNNs have been applied successfully to a variety of learning tasks, yet the theoretical understanding of their generalization capability is still developing. In this paper, we leverage manifold theory to analyze the statistical generalization gap of GNNs operating on graphs constructed from points sampled from manifolds. We study the generalization gaps of GNNs on both node-level and graph-level tasks. We show that the generalization gaps decrease with the number of nodes in the training graphs, which guarantees the generalization of GNNs to unseen points over manifolds. We validate our theoretical results on multiple real-world datasets.
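The neighbor-aggregation step mentioned above can be illustrated with a minimal sketch of a single GCN-style graph convolution layer, using symmetric degree normalization. The graph, features, and weights below are toy placeholders for illustration only; the specific convolution used in the paper may differ.

```python
import numpy as np

# Toy undirected 4-node cycle graph (hypothetical example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))  # node features

# One graph convolution: each node averages features over its
# neighborhood (with self-loop), then applies a learned linear map.
A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)                       # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
W = np.random.default_rng(1).normal(size=(3, 2))  # placeholder weights
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

print(H.shape)  # new (4, 2) node representations
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is the "successive applications of graph convolutions" the abstract refers to.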