Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data, with applications ranging from social network analysis to bioinformatics. Despite their versatility, GNNs face challenges such as over-smoothing, limited generalization, and poor interpretability, which hinder their wider adoption and reliability in critical applications. Dropping strategies have proven effective at reducing noise during training and improving the robustness of GNNs. However, existing approaches often rely on random or heuristic selection criteria and lack a principled method for identifying and excluding the nodes that contribute noise and over-complexity to the model. In this work, we argue that explainability should be a key indicator of a model's robustness throughout its training phase. To this end, we introduce xAI-Drop, a novel topological-level dropping regularizer that leverages explainability to pinpoint noisy network elements to be excluded from the GNN propagation mechanism. An empirical evaluation on diverse real-world datasets demonstrates that our method outperforms current state-of-the-art dropping approaches in accuracy, effectively reduces over-smoothing, and improves explanation quality.
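The idea of using explanation quality to select which nodes to exclude from message passing can be sketched as follows. This is a minimal illustration only, assuming per-node explanation fidelity scores (higher means the explanation is more faithful) are already available from some explainer; the `xai_drop_mask` helper, the 50% drop probability, and the toy scores are hypothetical, not the paper's actual criterion.

```python
import numpy as np

def xai_drop_mask(fidelity, drop_frac=0.2, rng=None):
    """Return a boolean keep-mask over nodes.

    Nodes with the lowest explanation fidelity (a proxy for 'noisy'
    elements) become drop candidates; a random subset of those
    candidates is excluded from propagation for this training step.
    """
    rng = np.random.default_rng(rng)
    n = len(fidelity)
    k = int(drop_frac * n)
    # indices of the k nodes with the worst explanation quality
    worst = np.argsort(fidelity)[:k]
    keep = np.ones(n, dtype=bool)
    # drop each low-fidelity candidate with probability 0.5, so the
    # regularizer stays stochastic rather than deterministic
    drop = worst[rng.random(k) < 0.5]
    keep[drop] = False
    return keep

# toy fidelity scores for 10 nodes (hypothetical values)
fid = np.array([0.9, 0.1, 0.8, 0.2, 0.95, 0.05, 0.7, 0.6, 0.3, 0.85])
mask = xai_drop_mask(fid, drop_frac=0.3, rng=0)
```

The mask would then gate which nodes participate in the GNN's message passing for that step; well-explained nodes are always retained, while poorly explained ones are dropped at random.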