Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data. Despite their success in a wide range of graph mining tasks, GNNs have raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, biases against certain populations, and lack of explainability. Recently, integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies, since many GNN trustworthiness issues can be alleviated by capturing the underlying data causality rather than superficial correlations. In this survey, we comprehensively review recent research efforts on Causality-Inspired GNNs (CIGNNs). Specifically, we first employ causal tools to analyze the primary trustworthiness risks of existing GNNs, underscoring the necessity for GNNs to comprehend the causal mechanisms within graph data. Moreover, we introduce a taxonomy of CIGNNs based on the type of causal learning capability they are equipped with, i.e., causal reasoning and causal representation learning. We then systematically introduce typical methods within each category and discuss how they mitigate trustworthiness risks. Finally, we summarize useful resources and discuss several future directions, hoping to shed light on new research opportunities in this emerging field. The representative papers, along with open-source data and codes, are available at https://github.com/usail-hkust/Causality-Inspired-GNNs.