Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose interpretability challenges, especially for non-expert users. Existing GNN explanation methods often yield technical outputs, such as subgraphs and feature importance scores, that are not easily understood. Building on recent insights from the social sciences and from other Explainable AI (XAI) methods, we propose GraphXAIN, a natural language narrative that explains individual predictions made by GNNs. We present a model-agnostic and explainer-agnostic XAI approach that complements graph explainers by using Large Language Models (LLMs) to integrate graph data, individual GNN predictions, explanatory subgraphs, and feature importances into GraphXAINs. We define XAI Narratives and XAI Descriptions, highlighting the distinction between them and emphasizing the importance of narrative principles for effective explanations. By incorporating natural language narratives, our approach supports graph practitioners and non-expert users alike, aligning with social science research on explainability and enhancing user understanding of and trust in complex GNN models. We demonstrate GraphXAIN's capabilities on a real-world graph dataset, illustrating how the generated narratives can aid understanding compared with traditional graph explainer outputs or other descriptive explanation methods.
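To make the described pipeline concrete, the following is a minimal sketch of how a GraphXAIN-style narrative could be assembled from generic explainer outputs. It is not the paper's implementation: all variable names, data values, the prompt wording, and the choice of the OpenAI chat API are illustrative assumptions; any graph explainer that produces an explanatory subgraph and feature importances could supply the inputs.

```python
# Minimal sketch (not the authors' implementation) of assembling a natural
# language narrative from a GNN prediction plus generic explainer outputs.
# All data below are mock values; the OpenAI call assumes OPENAI_API_KEY is set.
from openai import OpenAI

# Hypothetical outputs of a GNN and a graph explainer for one target node.
prediction = {"node": 17, "label": "influencer", "probability": 0.91}
subgraph_edges = [  # (source, target, edge importance from the explainer)
    (17, 4, 0.83), (17, 9, 0.71), (4, 9, 0.40),
]
feature_importances = {"followers": 0.52, "posts_per_week": 0.31, "account_age": 0.09}


def build_prompt(prediction, edges, importances):
    """Serialise the prediction, explanatory subgraph, and feature importances
    into a single prompt that asks the LLM for a narrative, not a description."""
    lines = [
        f"A GNN predicted node {prediction['node']} as '{prediction['label']}' "
        f"with probability {prediction['probability']:.2f}.",
        "Most influential edges (source -> target: importance):",
        *[f"  {s} -> {t}: {w:.2f}" for s, t, w in edges],
        "Most influential node features:",
        *[f"  {name}: {w:.2f}" for name, w in importances.items()],
        "Write a short natural language narrative explaining this prediction "
        "to a non-expert, connecting the evidence into a coherent story "
        "rather than listing the numbers.",
    ]
    return "\n".join(lines)


prompt = build_prompt(prediction, subgraph_edges, feature_importances)
client = OpenAI()  # assumes an API key is configured in the environment
response = client.chat.completions.create(
    model="gpt-4o", messages=[{"role": "user", "content": prompt}]
)
print(response.choices[0].message.content)  # the GraphXAIN-style narrative
```

Because the sketch only consumes a prediction, a list of weighted edges, and a feature importance dictionary, it remains model-agnostic and explainer-agnostic in the sense used above: the same prompt-assembly step works regardless of which GNN or graph explainer produced those inputs.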