Graph neural networks (GNNs) are powerful tools for conducting inference on graph data but are often seen as "black boxes" due to the difficulty of extracting the meaningful subnetworks that drive predictive performance. Many interpretable GNN methods exist, but they cannot quantify uncertainty in edge weights and suffer in predictive accuracy when applied to challenging graph structures. In this work, we propose BetaExplainer, which addresses these issues by using a sparsity-inducing prior to mask unimportant edges during model training. To evaluate our approach, we examine various simulated datasets with diverse real-world characteristics. Not only does this implementation provide a notion of edge-importance uncertainty, it also improves upon evaluation metrics for challenging datasets compared to state-of-the-art explainer methods.
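The core idea above, drawing per-edge soft masks from a Beta distribution so that each edge's importance comes with an uncertainty estimate, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the per-edge `alpha` and `beta` parameters are fixed here, whereas BetaExplainer would learn them during training, and the toy 4-node adjacency matrix is invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adjacency matrix for a 4-node graph (1 = edge present); hypothetical example.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Hypothetical per-edge Beta parameters. In the method described above these
# would be learned so that unimportant edges are pushed toward mask 0;
# here they are fixed for illustration.
alpha = np.full_like(adj, 0.5)
beta = np.full_like(adj, 0.5)

def sample_edge_mask(alpha, beta, rng):
    """Draw a soft mask in [0, 1] for every potential edge from Beta(alpha, beta)."""
    return rng.beta(alpha, beta)

def masked_adjacency(adj, alpha, beta, rng, n_samples=1000):
    """Estimate expected edge importance by averaging sampled masks.

    Returns the masked adjacency (expected importance applied to existing
    edges) and the per-edge standard deviation of the sampled masks, which
    serves as a notion of uncertainty in each edge's importance.
    """
    samples = np.stack(
        [sample_edge_mask(alpha, beta, rng) for _ in range(n_samples)]
    )
    mean_mask = samples.mean(axis=0)  # ~ alpha / (alpha + beta)
    std_mask = samples.std(axis=0)    # uncertainty in edge importance
    return adj * mean_mask, std_mask

masked, uncertainty = masked_adjacency(adj, alpha, beta, rng)
```

Because the mask is a distribution rather than a point estimate, edges whose sampled importances vary widely can be flagged as uncertain, which is the property a deterministic edge mask cannot provide.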