Graph convolutional networks (GCNs) have emerged as powerful models for graph learning tasks, exhibiting promising performance across a variety of domains. While their empirical success is evident, there is a growing need to understand their essential capabilities from a theoretical perspective. Existing theoretical research has primarily focused on the analysis of single-layer GCNs, while a comprehensive theoretical exploration of the stability and generalization of deep GCNs remains limited. In this paper, we bridge this gap by delving into the stability and generalization properties of deep GCNs, aiming to provide valuable insights by rigorously characterizing the associated upper bounds. Our theoretical results reveal that the stability and generalization of deep GCNs are influenced by certain key factors, such as the maximum absolute eigenvalue of the graph filter operators and the depth of the network. Our theoretical studies contribute to a deeper understanding of the stability and generalization properties of deep GCNs, potentially paving the way for developing more reliable and better-performing models.
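To make the first of these factors concrete, the sketch below computes the maximum absolute eigenvalue of a common graph filter operator. It is a minimal illustration, not the paper's analysis: it assumes the standard symmetrically normalized GCN filter with self-loops, and the small graph is hypothetical.

```python
import numpy as np

# Hypothetical undirected graph on 4 nodes (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized filter with self-loops: D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(A.shape[0])
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
g = D_inv_sqrt @ A_hat @ D_inv_sqrt

# Maximum absolute eigenvalue of the filter operator.
lam_max = np.max(np.abs(np.linalg.eigvalsh(g)))
print(lam_max)
```

For this particular normalization the maximum absolute eigenvalue equals 1, which keeps repeated filtering from amplifying signals as depth grows; filters whose spectral radius exceeds 1 can compound perturbations layer by layer, which is one intuition for why this quantity appears in stability bounds.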