There is a recent trend to leverage the power of graph neural networks (GNNs) for brain-network-based psychiatric diagnosis, which, in turn, creates an urgent need for psychiatrists to fully understand the decision behavior of the GNNs in use. However, most existing GNN explainers are either post-hoc, in which a separate interpretive model must be created to explain a well-trained GNN, or do not consider the causal relationship between the extracted explanation and the decision, such that the explanation itself contains spurious correlations and suffers from weak faithfulness. In this work, we propose a Granger causality-inspired graph neural network (CI-GNN), a built-in interpretable model that identifies the most influential subgraph (i.e., functional connectivity within brain regions) causally related to the decision (e.g., major depressive disorder patients versus healthy controls), without training an auxiliary interpretive network. CI-GNN learns disentangled subgraph-level representations α and β that encode, respectively, the causal and non-causal aspects of the original graph under a graph variational autoencoder framework, regularized by a conditional mutual information (CMI) constraint. We theoretically justify the validity of the CMI regularization in capturing the causal relationship. We also empirically evaluate CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets. We observe that CI-GNN achieves the best performance across a wide range of metrics and provides more reliable and concise explanations that are supported by clinical evidence. The source code and implementation details of CI-GNN are freely available at the GitHub repository (https://github.com/ZKZ-Brain/CI-GNN/).
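To make the CMI regularizer concrete: the abstract's intuition is that the causal representation α should remain informative about the label Y even after conditioning on the non-causal representation β, i.e., I(α; Y | β) should be large. The sketch below is only a toy illustration of that quantity using a plug-in estimator over discrete samples; CI-GNN itself works with continuous latent representations (typically via variational bounds), and the function name here is hypothetical, not from the paper's code.

```python
import math
from collections import Counter

def conditional_mutual_information(a, b, y):
    """Plug-in estimate of I(A; Y | B) from paired discrete samples.

    I(A; Y | B) = sum_{a,b,y} p(a,b,y) * log[ p(a,b,y) p(b) / (p(a,b) p(b,y)) ]

    A high I(alpha; Y | beta) means alpha still predicts the label Y even
    after conditioning on beta -- the intuition behind CI-GNN's CMI
    regularizer (which operates on continuous latents, not this
    discrete toy estimator).
    """
    n = len(a)
    p_aby = Counter(zip(a, b, y))  # joint counts of (a, b, y)
    p_ab = Counter(zip(a, b))
    p_by = Counter(zip(b, y))
    p_b = Counter(b)
    cmi = 0.0
    for (ai, bi, yi), c in p_aby.items():
        # Counts of n cancel: p(a,b,y)p(b) / (p(a,b)p(b,y)) = c*c_b / (c_ab*c_by)
        ratio = (c * p_b[bi]) / (p_ab[(ai, bi)] * p_by[(bi, yi)])
        cmi += (c / n) * math.log(ratio)
    return cmi

# Toy check: when Y duplicates A and B is constant, I(A; Y | B) = I(A; Y) = ln 2 nats.
a = [0, 1] * 50
print(conditional_mutual_information(a, [0] * 100, list(a)))  # ~0.693 (= ln 2)
```

In CI-GNN's setting, maximizing this quantity for α (while discouraging it for β) is what disentangles the causal subgraph from noise, so the extracted explanation is tied to the decision rather than to spurious correlations.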