Spectral graph neural networks learn graph filters, but their behavior as depth and polynomial order increase is not well understood. We analyze these models in the graph Fourier domain, where each layer becomes an element-wise frequency update; this separates the fixed spectrum from the trainable parameters and makes depth and order explicit. In this setting, we show that Gaussian complexity is invariant under the Graph Fourier Transform, which lets us derive data-dependent, depth- and order-aware generalization bounds together with stability estimates. In the linear case, our bounds are tighter, and on real graphs the data-dependent term correlates with the generalization gap across polynomial bases, highlighting practical choices that avoid frequency amplification across layers.
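To make the "element-wise frequency update" view concrete, here is a minimal NumPy sketch of a single polynomial spectral filter layer. All function and variable names are illustrative, not from the paper; it assumes an undirected graph with the symmetric normalized Laplacian, whose eigenvectors form the graph Fourier basis.

```python
# Hypothetical sketch: a polynomial spectral filter as an element-wise
# update in the graph Fourier domain. Names are illustrative only.
import numpy as np

def graph_fourier_basis(A):
    """Eigendecomposition of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}; eigenvalues act as graph frequencies."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)  # lam: frequencies, U: Fourier basis
    return lam, U

def spectral_layer(x, lam, U, theta):
    """One layer y = U diag(p(lam)) U^T x. In the Fourier domain this
    is the element-wise update x_hat -> p(lam) * x_hat, where the
    polynomial p has trainable coefficients theta (lowest degree first)."""
    p = np.polyval(theta[::-1], lam)  # p(lam_i) = sum_k theta_k * lam_i^k
    x_hat = U.T @ x                   # forward graph Fourier transform
    return U @ (p * x_hat)            # element-wise filter, then inverse GFT

# Toy example: path graph on 4 nodes, order-2 (monomial-basis) filter.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam, U = graph_fourier_basis(A)
x = np.random.default_rng(0).standard_normal(4)
y = spectral_layer(x, lam, U, theta=np.array([1.0, -0.5, 0.25]))
```

Because `U` is orthogonal, the GFT preserves Euclidean norms (Parseval), which is the mechanism behind the invariance of Gaussian complexity noted above; stacking such layers multiplies the per-frequency responses, which is where depth-wise frequency amplification can arise.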