Explanations provided by Self-explainable Graph Neural Networks (SE-GNNs) are fundamental for understanding a model's inner workings and for identifying potential misuse of sensitive attributes. Although recent works have highlighted that these explanations can be suboptimal and potentially misleading, a systematic characterization of their failure cases is still missing. In this work, we identify a critical failure mode of SE-GNN explanations: explanations can be unambiguously unrelated to how the SE-GNN infers labels. We show that, on the one hand, many SE-GNNs can achieve optimal true risk while producing these degenerate explanations, and, on the other, most faithfulness metrics can fail to identify them. Our empirical analysis reveals that degenerate explanations can be maliciously planted (allowing an attacker to hide the use of sensitive attributes) and can also emerge naturally, highlighting the need for reliable auditing. To address this, we introduce a novel faithfulness metric that reliably marks degenerate explanations as unfaithful in both malicious and natural settings. Our code is available in the supplemental material.