Graph Neural Networks (GNNs) learn node representations through iterative message passing over the graph structure. While powerful, deep GNNs suffer from oversmoothing, where node features converge to a homogeneous, uninformative state. We reframe this representational collapse from a \emph{bifurcation theory} perspective, characterizing oversmoothing as convergence to a stable ``homogeneous fixed point.'' Our central contribution is the theoretical discovery that this undesired stability can be broken by replacing standard monotone activations (e.g., ReLU) with a particular class of non-monotone functions. Using Lyapunov--Schmidt reduction, we prove analytically that this substitution induces a bifurcation that destabilizes the homogeneous state and creates a pair of stable, non-homogeneous \emph{patterns} that provably resist oversmoothing. Our theory predicts a precise, nontrivial scaling law for the amplitude of these emergent patterns, which we validate quantitatively in experiments. Finally, we demonstrate the practical value of the theory by deriving a closed-form, bifurcation-aware initialization and showing its benefits on real benchmark experiments.
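For intuition, the amplitude scaling described above is of the kind produced by a generic supercritical pitchfork bifurcation; the normal form below is a standard textbook illustration under that assumption, not the paper's exact reduced equation. Here $a$ denotes the amplitude of the pattern along the critical mode, $\mu$ a control parameter (e.g., a coupling or activation strength), $\mu_c$ its critical value, and $\kappa > 0$ a cubic coefficient; all four symbols are introduced here for illustration only:
\begin{equation*}
\dot{a} = (\mu - \mu_c)\,a - \kappa\, a^{3}, \qquad \kappa > 0 ,
\end{equation*}
whose homogeneous state $a = 0$ loses stability for $\mu > \mu_c$, giving way to a pair of stable non-homogeneous branches
\begin{equation*}
a^{*} = \pm\sqrt{\frac{\mu - \mu_c}{\kappa}} \;\propto\; (\mu - \mu_c)^{1/2} .
\end{equation*}
The square-root dependence $(\mu - \mu_c)^{1/2}$ is the canonical amplitude law that a Lyapunov--Schmidt reduction yields near such a bifurcation, which is the shape of scaling the abstract refers to.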