Graph Neural Networks (GNNs) are powerful tools for graph classification, yet applied problems often come with noisy labels. In this work, we study GNN robustness to label noise and demonstrate the failure modes in which GNNs struggle to generalise: on low-order graphs, under low label coverage, and when the model is over-parameterized. We establish both empirical and theoretical links between GNN robustness and the reduction of the total Dirichlet energy of the learned node representations, which quantifies the hypothesized smoothness inductive bias of GNNs. Finally, we introduce two training strategies to enhance GNN robustness: (1) incorporating a novel inductive bias into the weight matrices by removing their negative eigenvalues, an operation connected to Dirichlet energy minimization; and (2) extending to GNNs a loss penalty that promotes learned smoothness. Importantly, neither approach degrades performance in noise-free settings, supporting our hypothesis that the source of GNN robustness is the smoothness inductive bias.
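For reference, the total Dirichlet energy invoked above is the standard graph smoothness functional; the exact normalization used in the paper (e.g., a degree-normalized Laplacian) may differ. For node representations $X \in \mathbb{R}^{n \times d}$ with rows $x_i$, on a graph with adjacency matrix $A$, degree matrix $D$, and Laplacian $L = D - A$:

\[
E(X) \;=\; \frac{1}{2}\sum_{i,j} A_{ij}\,\lVert x_i - x_j \rVert_2^2 \;=\; \operatorname{tr}\!\big(X^\top L X\big).
\]

Low energy means adjacent nodes receive similar representations, so reducing $E(X)$ is one concrete expression of the smoothness inductive bias.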
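A minimal sketch of strategy (1), under the assumption that "removal of negative eigenvalues" means projecting each square weight matrix onto the positive semi-definite cone after every optimizer step; the helper name and the training-loop placement below are illustrative, not the authors' implementation:

```python
import torch

@torch.no_grad()
def clip_negative_eigenvalues(weight: torch.Tensor) -> torch.Tensor:
    """Project a square weight matrix onto the PSD cone.

    Symmetrize first (eigh expects a symmetric matrix), then zero out
    the negative eigenvalues and reassemble. Hypothetical helper, not
    the paper's implementation.
    """
    sym = 0.5 * (weight + weight.T)
    eigvals, eigvecs = torch.linalg.eigh(sym)
    eigvals = eigvals.clamp(min=0.0)  # remove negative eigenvalues
    return eigvecs @ torch.diag(eigvals) @ eigvecs.T

# Illustrative use after each optimizer step (square weight matrices only):
# with torch.no_grad():
#     for p in model.parameters():
#         if p.ndim == 2 and p.shape[0] == p.shape[1]:
#             p.copy_(clip_negative_eigenvalues(p))
```

Intuitively, negative eigenvalues of the weights can amplify high-frequency (non-smooth) components of the representations; clipping them biases each layer toward reducing the Dirichlet energy, which is the connection the abstract alludes to.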
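And a sketch of strategy (2) as a Dirichlet-energy regularizer added to the classification loss; the penalty form and the weight `lam` are assumptions about what "a loss penalty that promotes learned smoothness" might look like:

```python
import torch
import torch.nn.functional as F

def dirichlet_energy(x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """E(X) = 1/2 * sum over edges of ||x_i - x_j||^2.

    `edge_index` is a 2 x num_edges tensor (PyTorch Geometric convention);
    when both directions of each undirected edge are stored, the 1/2
    factor makes this equal tr(X^T L X).
    """
    src, dst = edge_index
    return 0.5 * (x[src] - x[dst]).pow(2).sum()

def smooth_loss(logits, labels, node_repr, edge_index, lam=1e-3):
    # Graph-level task loss on the (possibly noisy) labels, plus a
    # hypothetical smoothness penalty on the node representations
    # (pre-pooling embeddings), weighted by `lam`.
    return F.cross_entropy(logits, labels) + lam * dirichlet_energy(node_repr, edge_index)
```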