This article introduces PAGE, a parameterized generative explanation framework capable of providing faithful explanations for any graph neural network (GNN) without requiring prior knowledge of, or access to, the model's internals. Specifically, we train an auto-encoder with an appropriately designed training strategy to generate explanatory substructures. Because the auto-encoder's latent space reduces the feature dimensionality, the causal features that drive the model's output become easier to extract, and these features can then be used directly to generate explanations. To this end, we introduce an additional discriminator that captures the causality between the latent causal features and the model's output. With suitably designed optimization objectives, the well-trained discriminator constrains the encoder to generate stronger causal features. Finally, the decoder maps these features back to substructures of the input graph, which serve as the explanations. Compared with existing methods, PAGE operates at the level of whole samples rather than individual nodes or edges, eliminating the perturbation or encoding steps required by previous approaches. Experimental results on both synthetic and real-world datasets demonstrate that our approach not only achieves the highest faithfulness and accuracy but also significantly outperforms baseline methods in efficiency.
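The pipeline described above, an encoder compressing the input graph into low-dimensional latent causal features, a discriminator tying those features to the GNN's output, and a decoder mapping them back to an explanatory substructure, can be sketched minimally as follows. This is an illustrative toy with random weights, not the paper's implementation; all class names, dimensions, and architectural choices (flattened adjacency input, linear maps, a sigmoid edge mask) are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
N_NODES, LATENT_DIM = 6, 3


class PAGESketch:
    """Minimal sketch of an encoder-discriminator-decoder pipeline.

    All weights are random; a real implementation would train them with
    the reconstruction and causal objectives described in the abstract.
    """

    def __init__(self, n_nodes, latent_dim):
        n_feat = n_nodes * n_nodes  # flattened adjacency matrix
        self.W_enc = rng.normal(size=(n_feat, latent_dim))
        self.W_dec = rng.normal(size=(latent_dim, n_feat))
        self.w_disc = rng.normal(size=latent_dim)

    def encode(self, adj):
        # Compress the graph into low-dimensional latent causal features.
        return np.tanh(adj.reshape(-1) @ self.W_enc)

    def discriminate(self, z):
        # Predict the target GNN's output from the latent features alone;
        # training this head is what captures the feature->output causality.
        return 1.0 / (1.0 + np.exp(-(z @ self.w_disc)))

    def decode(self, z, adj):
        # Map latent features to an edge mask over the input graph;
        # high-scoring edges form the explanatory substructure.
        scores = (z @ self.W_dec).reshape(adj.shape)
        return 1.0 / (1.0 + np.exp(-scores)) * adj  # keep only existing edges


adj = (rng.random((N_NODES, N_NODES)) > 0.5).astype(float)
model = PAGESketch(N_NODES, LATENT_DIM)
z = model.encode(adj)
mask = model.decode(z, adj)
```

Because the explanation is produced by a single forward pass through the trained networks, no per-sample perturbation loop is needed at explanation time, which is the source of the efficiency advantage claimed above.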