Artificial Intelligence (AI) is one of the major technological advancements of this century, offering substantial benefits to users through AI-powered applications and tools in numerous domains. Because AI models are often black-box (i.e., their decision-making process is unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret a model's behaviour and produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often treated as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature to present explanations to users effectively. To improve the contribution and real-world impact of this survey, we also present HERMES, a platform to support Human-cEnteRed developMent of Explainable user interfaceS and guide practitioners and scholars in the design and evaluation of XUIs.