Artificial Intelligence (AI) is one of the major technological advancements of this century, offering enormous potential to users through AI-powered applications and tools in numerous domains. Since AI models are often black boxes (i.e., their decision-making process is unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret the behaviour of AI models and produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often treated as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature to effectively present explanations to users. To improve the contribution and real-world impact of this survey, we also present HERMES, a platform to support Human-cEnteRed developMent of Explainable user interfaceS, which guides practitioners and scholars in the design and evaluation of XUIs.