Explanations of machine learning (ML) model predictions generated by Explainable AI (XAI) techniques such as SHAP are essential for people using ML outputs for decision-making. We explore the potential of Large Language Models (LLMs) to transform these explanations into human-readable, narrative formats that align with natural communication. We address two key research questions: (1) Can LLMs reliably transform traditional explanations into high-quality narratives? and (2) How can we effectively evaluate the quality of narrative explanations? To answer these questions, we introduce Explingo, which consists of two LLM-based subsystems, a Narrator and a Grader. The Narrator takes in ML explanations and transforms them into natural-language descriptions. The Grader scores these narratives on a set of metrics including accuracy, completeness, fluency, and conciseness. Our experiments demonstrate that LLMs can generate high-quality narratives that achieve high scores across all metrics, particularly when guided by a small number of human-labeled and bootstrapped examples. We also identify areas that remain challenging, in particular effectively scoring narratives in complex domains. The findings from this work have been integrated into an open-source tool that makes narrative explanations available for further applications.