This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters exceeds the number of data points. We study two widely used classes of graph neural networks: graph convolutional networks and message-passing graph neural networks. Prior to this study, existing bounds on the generalization error in the over-parameterized regime were vacuous and therefore uninformative, limiting our understanding of how over-parameterized networks perform. Our approach derives upper bounds in the mean-field regime for the generalization error of these graph neural networks. We establish upper bounds with a convergence rate of $O(1/n)$, where $n$ is the number of graph samples. These bounds give a theoretical guarantee on the networks' performance on unseen data in the challenging over-parameterized regime and thereby contribute to a broader understanding of their behavior.
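As an illustrative sketch only (the notation below is ours, not the paper's): writing $\mathcal{R}(f)$ for the population risk and $\mathcal{R}_n(f)$ for the empirical risk over $n$ graph samples, a bound with this rate takes the form
\[
\mathbb{E}\big[\mathcal{R}(\hat{f}_n) - \mathcal{R}_n(\hat{f}_n)\big] \;\le\; \frac{C}{n},
\]
where $\hat{f}_n$ denotes the trained network and $C$ is a constant independent of $n$; for such a bound to remain informative in the over-parameterized regime, $C$ must not grow with the number of parameters.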