Graph neural networks (GNNs) have achieved remarkable empirical success in processing and representing graph-structured data across various domains. However, a significant challenge known as "oversmoothing" persists, where vertex features become nearly indistinguishable in deep GNNs, severely restricting their expressive power and practical utility. In this work, we analyze the asymptotic oversmoothing rates of deep GNNs with and without residual connections by deriving explicit convergence rates for a normalized vertex similarity measure. Our analytical framework is grounded in the multiplicative ergodic theorem. Furthermore, we demonstrate that adding residual connections effectively mitigates or prevents oversmoothing across several broad families of parameter distributions. The theoretical findings are strongly supported by numerical experiments.
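The dynamics described above can be illustrated with a small numerical sketch. Everything here is an illustrative assumption rather than the paper's exact setup: the graph is a random symmetric adjacency with self-loops, the layers use i.i.d. Gaussian weights, and the similarity measure is a simple mean-centered ratio (one common choice of normalized vertex similarity), not necessarily the measure analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: symmetric random adjacency with self-loops,
# row-normalized so that constant vertex features are a fixed point.
n, d = 20, 8
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)

def similarity(X):
    """Normalized distance of the features X from the 'oversmoothed' state
    in which every vertex carries the same feature vector (0 = fully smoothed)."""
    X_centered = X - X.mean(axis=0, keepdims=True)
    return np.linalg.norm(X_centered) / np.linalg.norm(X)

def run(depth, residual):
    """Run a deep linear GNN with random i.i.d. layer weights (an assumption,
    standing in for the paper's random-parameter setting)."""
    X = rng.standard_normal((n, d))
    for _ in range(depth):
        W = rng.standard_normal((d, d)) / np.sqrt(d)
        X = X + A_hat @ X @ W if residual else A_hat @ X @ W
    return similarity(X)

# Without residual connections the similarity measure collapses toward 0
# (features become indistinguishable); with them it decays far more slowly.
print(run(50, residual=False))
print(run(50, residual=True))
```

In this sketch the plain network contracts the non-constant feature component geometrically at each layer, while the residual update adds the identity back in, which is one intuition for why residual connections slow or prevent the collapse.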