Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research on the explainability of SGNN models. Our goal is to make the decision-making of SGNNs explainable for the downstream task of link sign prediction. Since post-hoc explanations are not derived directly from the model, they may be biased and misrepresent the true reasons behind its predictions. Therefore, in this paper we introduce a Self-Explainable Signed Graph transformer (SE-SGformer) framework, which outputs explainable information while maintaining high prediction accuracy. Specifically, we propose a new Transformer architecture for signed graphs and theoretically demonstrate that positional encoding based on signed random walks has greater expressive power than existing SGNN methods and other positional-encoding-based graph Transformer approaches. We construct a novel explainable decision process by discovering the $K$-nearest (farthest) positive (negative) neighbors of a node, which replace the neural-network-based decoder for predicting edge signs. These $K$ positive (negative) neighbors capture crucial information about the formation of positive (negative) edges between nodes and thus serve as important explanatory evidence in the decision-making process. We conducted experiments on several real-world datasets to validate the effectiveness of SE-SGformer, which outperforms state-of-the-art methods, improving prediction accuracy by 2.2\% and explainability accuracy by 73.1\% in the best-case scenario.
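To make the explainable decision process concrete, the following is a minimal sketch of one plausible reading of the $K$-nearest (farthest) neighbor rule: an edge sign is predicted by comparing the embedding distance $d(u, v)$ against the distances to $u$'s $K$ nearest positive neighbors and $K$ farthest negative neighbors. The distance-to-prototype comparison rule, the function name, and the data layout are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def predict_edge_sign(z, pos_nbrs, neg_nbrs, u, v, k=2):
    """Hypothetical distance-based sign decoder.

    z        : (n, d) array of node embeddings
    pos_nbrs : dict mapping a node to its positively-linked neighbors
    neg_nbrs : dict mapping a node to its negatively-linked neighbors
    Returns +1 (positive edge) or -1 (negative edge) for (u, v).
    """
    d = lambda a, b: np.linalg.norm(z[a] - z[b])
    # K nearest positive neighbors of u (smallest embedding distances)
    pos_d = sorted(d(u, w) for w in pos_nbrs[u])[:k]
    # K farthest negative neighbors of u (largest embedding distances)
    neg_d = sorted((d(u, w) for w in neg_nbrs[u]), reverse=True)[:k]
    duv = d(u, v)
    # Predict positive when d(u, v) is closer to the positive-neighbor
    # prototype distance than to the negative-neighbor one; the K
    # neighbors used here double as the explanation for the decision.
    if abs(duv - np.mean(pos_d)) <= abs(duv - np.mean(neg_d)):
        return +1
    return -1
```

Under this sketch, the $K$ neighbors that determine the threshold are exactly the explanatory evidence the abstract refers to: they can be returned alongside the prediction instead of a post-hoc attribution over an opaque decoder.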