As one of the popular quantitative metrics for assessing the quality of explanations of graph neural networks (GNNs), fidelity measures the change in model output after removing unimportant parts of the input graph. Fidelity has been widely used owing to its straightforward interpretation: the underlying model should produce similar predictions when features deemed unimportant by the explanation are removed. This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?" To answer it, we explore the potential of the fidelity measure for graph pruning, with the ultimate goal of making GNN models more efficient. To this end, we propose Fidelity$^-$-inspired Pruning (FiP), an effective framework for constructing global edge masks from local explanations. Our empirical study using 7 edge attribution methods demonstrates that, surprisingly, general eXplainable AI (XAI) methods outperform methods tailored to GNNs in terms of graph pruning performance.