Explanations provided by Self-explainable Graph Neural Networks (SE-GNNs) are fundamental for understanding a model's inner workings and for identifying potential misuse of sensitive attributes. Although recent works have highlighted that these explanations can be suboptimal and potentially misleading, a characterization of their failure cases is still unavailable. In this work, we identify a critical failure mode of SE-GNN explanations: explanations can be unambiguously unrelated to how the SE-GNN infers labels. We show that, on the one hand, many SE-GNNs can achieve optimal true risk while producing such degenerate explanations, and, on the other, most faithfulness metrics can fail to flag them. Our empirical analysis reveals that degenerate explanations can be maliciously planted (allowing an attacker to hide the use of sensitive attributes) and can also emerge naturally, highlighting the need for reliable auditing. To address this, we introduce a novel faithfulness metric that reliably marks degenerate explanations as unfaithful, in both malicious and natural settings. Our code is available in the supplementary material.