Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data, with applications ranging from social network analysis to bioinformatics. Despite their versatility, GNNs face challenges such as limited generalization and poor interpretability, which hinder their wider adoption and reliability in critical applications. Dropping has emerged as an effective paradigm for improving the generalization capabilities of GNNs. However, existing approaches often rely on random or heuristic selection criteria and lack a principled method for identifying and excluding nodes that contribute noise and over-complexity to the model. In this work, we argue that explainability should be a key indicator of a model's quality throughout its training phase. To this end, we introduce xAI-Drop, a novel topological-level dropping regularizer that leverages explainability to pinpoint noisy network elements to be excluded from the GNN propagation mechanism. An empirical evaluation on diverse real-world datasets demonstrates that our method outperforms current state-of-the-art dropping approaches in accuracy and improves explanation quality.