Artificial Intelligence algorithms have become pervasive in multiple high-stakes domains. However, their internal logic can be obscure to humans. Explainable Artificial Intelligence aims to design tools and techniques that illustrate the predictions of so-called black-box algorithms. The Human-Computer Interaction community has long stressed the need for a more user-centered approach to Explainable AI. This approach can benefit from research in user interfaces, user experience, and visual analytics. This paper proposes a visual method for illustrating rules paired with feature importance. To test its effectiveness with users, we conducted a user study with 15 participants, comparing our visual method against the algorithm's original output and a textual representation.