This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters exceeds the number of data points. We study two widely used types of graph neural networks: graph convolutional neural networks and message passing graph neural networks. Prior to this study, existing bounds on the generalization error in the over-parameterized regime were uninformative, limiting our understanding of over-parameterized network performance. Our novel approach derives upper bounds in the mean-field regime for the generalization error of these graph neural networks. We establish upper bounds with a convergence rate of $O(1/n)$, where $n$ is the number of graph samples. These upper bounds offer a theoretical assurance of the networks' performance on unseen data in the challenging over-parameterized regime and contribute to our overall understanding of their behavior.