Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose challenges in interpretability. Existing GNN explanation methods usually yield technical outputs, such as subgraphs and feature importance scores, that non-data scientists find difficult to understand, thereby defeating the purpose of explanations. Motivated by recent Explainable AI (XAI) research, we propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions. GraphXAIN is a model- and explainer-agnostic method that uses Large Language Models (LLMs) to translate explanatory subgraphs and feature importance scores into coherent, story-like explanations of GNN decision-making processes. Evaluations on real-world datasets demonstrate GraphXAIN's ability to improve graph explanations. A survey of machine learning researchers and practitioners reveals that GraphXAIN enhances four explainability dimensions: understandability, satisfaction, convincingness, and suitability for communicating model predictions. When combined with a graph explainer method, GraphXAIN further improves trustworthiness, insightfulness, confidence, and usability. Notably, 95% of participants found GraphXAIN to be a valuable addition to GNN explanation methods. By incorporating natural language narratives, our approach serves both graph practitioners and non-expert users by providing clearer and more effective explanations.
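To make the pipeline concrete, the sketch below illustrates the general idea of such an explainer-agnostic narrative step: serializing an explanatory subgraph and feature importance scores into an LLM prompt. This is our own minimal illustration under stated assumptions, not the authors' reference implementation; the function names (`build_prompt`, `call_llm`) and the prompt wording are hypothetical, and `call_llm` stands in for any chat-completion client.

```python
# Minimal sketch of a GraphXAIN-style narrative pipeline (illustrative only):
# serialize an explainer's output into a prompt and ask an LLM to narrate it.

def build_prompt(node_id, prediction, subgraph_edges, feature_importance):
    """Serialize an explanatory subgraph and feature scores into an LLM prompt."""
    edges = "; ".join(f"{u} -> {v}" for u, v in subgraph_edges)
    feats = "; ".join(
        f"{name}: {score:.2f}"
        for name, score in sorted(feature_importance.items(),
                                  key=lambda kv: -kv[1])
    )
    return (
        f"A GNN predicted '{prediction}' for node {node_id}.\n"
        f"Explanatory subgraph edges: {edges}.\n"
        f"Feature importance scores: {feats}.\n"
        "Write a short, coherent narrative explaining this prediction "
        "for a non-technical reader."
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in any chat-completion API here.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt(
        node_id=42,
        prediction="positive class",
        subgraph_edges=[(42, 7), (42, 19), (7, 19)],
        feature_importance={"degree": 0.61, "feature_a": 0.27},
    )
    print(prompt)  # the serialized explanation passed to the LLM
    # narrative = call_llm(prompt)  # would return a story-like explanation
```

Because the prompt is built only from the explainer's outputs (subgraph plus feature scores) and the model's prediction, the same step applies regardless of which GNN or which graph explainer produced them, which is what makes the approach model- and explainer-agnostic.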