Recent advances in Graph Neural Networks (GNNs) have spurred an upsurge of research dedicated to enhancing the explainability of GNNs, particularly in critical domains such as medicine. A promising approach is the self-explaining method, which outputs explanations along with predictions. However, existing self-explaining models require large amounts of training data, rendering them inapplicable in few-shot scenarios. To address this challenge, in this paper we propose a Meta-learned Self-Explaining GNN (MSE-GNN), a novel framework that generates explanations to support predictions in few-shot settings. MSE-GNN adopts a two-stage self-explaining structure consisting of an explainer and a predictor. Specifically, the explainer first imitates the human attention mechanism to select an explanation subgraph, whereby attention is naturally paid to regions containing important characteristics. The predictor then mimics the decision-making process, making predictions based on the generated explanation. Moreover, with a novel meta-training process and a mechanism designed to exploit task information, MSE-GNN achieves remarkable performance on new few-shot tasks. Extensive experiments on four datasets demonstrate that, compared with existing methods, MSE-GNN achieves superior performance on prediction tasks while generating high-quality explanations. The code is publicly available at https://github.com/jypeng28/MSE-GNN.
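The two-stage structure described above (an explainer that selects a subgraph, then a predictor that decides from that subgraph alone) can be sketched in miniature. This is a minimal illustrative sketch only, using a plain NumPy heuristic in place of the paper's learned GNN modules; the function names, the feature-norm "attention" score, and the toy readout are all assumptions, not MSE-GNN's actual implementation (see the linked repository for that).

```python
# Hypothetical sketch of a two-stage self-explaining pipeline.
# NumPy heuristics stand in for the learned explainer/predictor GNNs.
import numpy as np

def explainer(node_feats, ratio=0.5):
    """Score nodes with a stand-in 'attention' (feature norm) and keep
    the top-`ratio` fraction as the explanation subgraph."""
    scores = np.linalg.norm(node_feats, axis=1)
    k = max(1, int(len(scores) * ratio))
    keep = np.argsort(scores)[-k:]      # indices of highest-scoring nodes
    return np.sort(keep)

def predictor(node_feats, keep):
    """Predict from the explanation only: mean-pool the kept node
    features, then apply a toy fixed readout (stand-in for a trained head)."""
    pooled = node_feats[keep].mean(axis=0)
    return float(pooled.sum() > 0)      # toy binary decision

# Toy graph: 4 nodes with 3-dimensional features
x = np.array([[0.1, 0.0, 0.0],
              [2.0, 1.0, 0.5],
              [0.0, 0.2, 0.1],
              [1.5, 0.5, 1.0]])
subgraph = explainer(x, ratio=0.5)  # explanation: two highest-scoring nodes
pred = predictor(x, subgraph)       # prediction made only from the explanation
```

The design point the sketch preserves is that the prediction is computed exclusively from the selected subgraph, so the explanation is faithful by construction rather than generated post hoc.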