The increasing demand for transparent and reliable models, particularly in high-stakes decision-making areas such as medical image analysis, has led to the emergence of eXplainable Artificial Intelligence (XAI). Post-hoc XAI techniques, which aim to explain black-box models after training, have drawn controversy in recent works over their fidelity to the models' predictions. In contrast, Self-eXplainable AI (S-XAI) offers a compelling alternative by incorporating explainability directly into the training process of deep learning models. This approach allows models to generate inherent explanations that are closely aligned with their internal decision-making processes. Such enhanced transparency significantly supports the trustworthiness, robustness, and accountability of AI systems in real-world medical applications. To facilitate the development of S-XAI methods for medical image analysis, this survey presents a comprehensive review across various image modalities and clinical applications. It covers more than 200 papers from three key perspectives: 1) input explainability through the integration of explainable feature engineering and knowledge graphs, 2) model explainability via attention-based learning, concept-based learning, and prototype-based learning, and 3) output explainability by providing counterfactual explanations and textual explanations. Additionally, this paper outlines the desired characteristics of explainability and existing evaluation methods for assessing explanation quality. Finally, it discusses the major challenges and future research directions in developing S-XAI for medical image analysis.