Challenges persist in providing interpretable explanations for neural network reasoning in explainable AI (xAI). Existing methods such as Integrated Gradients produce noisy attribution maps, and LIME, while intuitive, can deviate from the model's actual reasoning. We introduce a framework that uses hierarchical segmentation techniques to produce faithful and interpretable explanations of Convolutional Neural Networks (CNNs). Our method constructs model-based hierarchical segmentations that preserve the model's reasoning fidelity and supports both human-centric and model-centric segmentation. This approach yields multiscale explanations, aiding bias identification and deepening understanding of neural network decision-making. Experiments show that our framework, xAiTrees, delivers highly interpretable and faithful model explanations, not only surpassing traditional xAI methods but also offering a novel approach to enhancing xAI interpretability.
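To make the core idea concrete, the sketch below illustrates one generic way to score image regions at multiple segmentation scales by occlusion: segment the image at several granularities, then measure how much the model's class probability drops when each region is masked out. This is a minimal illustration of segmentation-based, multiscale attribution, not the paper's xAiTrees algorithm; `model` and `preprocess` are assumed placeholders for the reader's own CNN and its input pipeline.

```python
# Minimal sketch (not the xAiTrees implementation): occlusion scoring of
# image segments at several scales. `model` is any torch CNN classifier and
# `preprocess` maps an HxWx3 numpy image to a normalized CHW tensor.
import numpy as np
import torch
from skimage.segmentation import felzenszwalb

@torch.no_grad()
def segment_scores(model, image, preprocess, target_class, scales=(50, 150, 400)):
    """Return one per-pixel score map per scale; higher = more important."""
    model.eval()
    base = model(preprocess(image).unsqueeze(0)).softmax(-1)[0, target_class].item()
    maps = []
    for scale in scales:  # segments grow coarser as `scale` increases
        labels = felzenszwalb(image, scale=scale)
        score_map = np.zeros(labels.shape, dtype=np.float32)
        for seg_id in np.unique(labels):
            occluded = image.copy()
            # Replace the region with the image's mean color.
            occluded[labels == seg_id] = image.mean(axis=(0, 1))
            p = model(preprocess(occluded).unsqueeze(0)).softmax(-1)[0, target_class].item()
            # A large probability drop marks the region as important.
            score_map[labels == seg_id] = base - p
        maps.append(score_map)
    return maps
```

Varying the segmentation scale is what gives the explanation its multiscale character: coarse maps localize the decisive object, while fine maps isolate the specific parts driving the prediction.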