The field of Explainable Artificial Intelligence (XAI) aims to improve the interpretability of black-box machine learning models. Building a heatmap from the importance values of input features is a popular method for explaining how such models produce their predictions. Heatmaps are generally understandable to humans, yet they are not without flaws. Non-expert users, for example, may not fully grasp the logic of heatmaps (the logic by which pixels relevant to the model's prediction are highlighted with different intensities or colors). Additionally, heatmaps frequently fail to fully distinguish the objects and regions of the input image that are relevant to the model's prediction. In this paper, we propose a framework called TbExplain that employs XAI techniques and a pre-trained object detector to present text-based explanations of scene classification models. Moreover, TbExplain incorporates a novel method to correct predictions and textually explain them based on the statistics of objects in the input image when the initial prediction is unreliable. To assess the trustworthiness and validity of the text-based explanations, we conducted a qualitative experiment, and the findings indicated that these explanations are sufficiently reliable. Furthermore, our quantitative and qualitative experiments on TbExplain with scene classification datasets reveal an improvement in classification accuracy over ResNet variants.