Explainable Artificial Intelligence has become a crucial area of research, aiming to demystify the decision-making processes of deep learning models. Among the various explainability techniques, counterfactual explanations have proven particularly promising, as they offer insight into model behavior by highlighting the minimal changes that would alter a prediction. Despite their potential, these explanations are often complex and technical, making them difficult for non-experts to interpret. To address this challenge, we propose a novel pipeline that leverages Language Models, both large and small, to compose narratives for counterfactual explanations. We employ knowledge distillation together with a refining mechanism to enable Small Language Models to perform comparably to their larger counterparts while maintaining robust reasoning abilities. In addition, we introduce a simple but effective method for evaluating natural language narratives, designed to verify whether a model's responses are consistent with the factual and counterfactual ground truth. As a result, our proposed pipeline enhances both the reasoning capabilities and the practical performance of student models, making them more suitable for real-world use cases.