Existing rule-based explanations for Graph Neural Networks (GNNs) provide global interpretability but often optimize and assess fidelity in an intermediate, uninterpretable concept space, overlooking how well the final subgraph explanations are grounded for end users. This gap yields explanations that may appear faithful yet prove unreliable in practice. To close this gap, we propose LogicXGNN, a post-hoc framework that constructs logical rules over reliable predicates explicitly designed to capture the GNN's message-passing structure, thereby ensuring effective grounding. We further introduce data-grounded fidelity ($\textit{Fid}_{\mathcal{D}}$), a realistic metric that evaluates explanations in their final-graph form, along with complementary utility metrics such as coverage and validity. Across extensive experiments, LogicXGNN improves $\textit{Fid}_{\mathcal{D}}$ by over 20% on average relative to state-of-the-art methods while running 10-100$\times$ faster. With its strong scalability and utility, LogicXGNN produces explanations that are faithful to the model's logic and reliably grounded in observable data. Our code is available at https://github.com/allengeng123/LogicXGNN/.
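To make the idea of evaluating fidelity "in final-graph form" concrete, the sketch below scores an explanation by whether the model's prediction on the grounded explanation subgraph agrees with its prediction on the full input graph. This is only a minimal illustrative reading of a data-grounded fidelity check; the names (`data_grounded_fidelity`, `model`, the `Graph` representation) and the agreement-rate formulation are our assumptions, not the paper's definition of $\textit{Fid}_{\mathcal{D}}$.

```python
# Illustrative sketch (assumption): data-grounded fidelity as the rate at
# which a grounded explanation subgraph, fed to the GNN as an ordinary
# graph, reproduces the model's prediction on the full graph.
from typing import Callable, Sequence, Tuple

Graph = dict  # hypothetical stand-in: {"nodes": [...], "edges": [...]}


def data_grounded_fidelity(
    model: Callable[[Graph], int],          # GNN wrapped as graph -> class label
    pairs: Sequence[Tuple[Graph, Graph]],   # (full graph, grounded explanation subgraph)
) -> float:
    """Fraction of instances where the grounded subgraph alone
    reproduces the model's prediction on the full graph."""
    agree = sum(model(g) == model(sub) for g, sub in pairs)
    return agree / len(pairs)


# Toy usage with a stand-in "model" that classifies by node-count parity.
toy_model = lambda g: len(g["nodes"]) % 2
full = {"nodes": [0, 1, 2, 3], "edges": [(0, 1), (1, 2), (2, 3)]}
sub = {"nodes": [0, 1], "edges": [(0, 1)]}
print(data_grounded_fidelity(toy_model, [(full, sub)]))  # 1.0
```

The key design point this sketch captures is that the explanation is evaluated as an actual graph the model can consume, rather than in an intermediate concept space.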