In recent years, black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in Remote Sensing. Despite the potential benefits of uncovering the inner workings of these models with explainable AI, a comprehensive overview summarizing the explainable AI methods used in Remote Sensing applications, along with their objectives, findings, and challenges, is still missing. In this paper, we address this gap by performing a systematic review to identify the key trends in how explainable AI is used in Remote Sensing and to shed light on novel explainable AI approaches and emerging directions that tackle specific Remote Sensing challenges. We also reveal common patterns of explanation interpretation, discuss the scientific insights extracted in Remote Sensing, and reflect on the approaches used to evaluate explainable AI methods. Our review provides a complete summary of the state of the art in the field. Further, we give a detailed outlook on the challenges and promising research directions, providing a basis for novel methodological development and a useful starting point for new researchers in the field of explainable AI in Remote Sensing.