We present a logic-based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree-based model that leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain accuracy similar to that of the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.
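To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of the distillation setup: node-level features are C2-style counting atoms of the form "number of neighbors with label a", and a single-split decision "tree" is fit to mimic a teacher classifier standing in for the GNN. All function and variable names here are illustrative assumptions.

```python
# Hedged sketch, not the paper's method: distill a one-split decision tree
# whose split tests a C2-style counting feature
#   count_a(v) = |{u : u is a neighbor of v with label a}| >= t,
# choosing (a, t) to maximize agreement with a teacher's node predictions.

def counting_feature(adj, labels, v, a):
    """C2-style counting atom: number of neighbors of v carrying label a."""
    return sum(1 for u in adj[v] if labels[u] == a)

def distill_stump(adj, labels, teacher, label_set):
    """Pick the (label a, threshold t) test that best mimics the teacher."""
    nodes = list(adj)
    best = None
    for a in sorted(label_set):
        feats = {v: counting_feature(adj, labels, v, a) for v in nodes}
        for t in range(0, max(feats.values()) + 2):
            # Candidate classifier: predict True iff count_a(v) >= t.
            agree = sum((feats[v] >= t) == teacher(v) for v in nodes) / len(nodes)
            if best is None or agree > best[0]:
                best = (agree, a, t)
    return best  # (agreement with teacher, label, threshold)

# Toy graph: a star with center 0; the "teacher" (stand-in for a GNN)
# labels a node positive iff it has at least two 'b'-labeled neighbors.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
labels = {0: 'a', 1: 'b', 2: 'b', 3: 'a'}
teacher = lambda v: counting_feature(adj, labels, v, 'b') >= 2
agreement, a, t = distill_stump(adj, labels, teacher, {'a', 'b'})
```

Because the teacher's decision rule is itself expressible as a counting threshold, the distilled stump recovers it exactly (agreement 1.0 on this toy graph), mirroring the abstract's claim that distillation can match or beat the GNN when the ground truth lies in C2.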