A core assumption of Explainable AI (XAI) is that explanations are useful to users -- that is, that users will do something with the explanations. Prior work, however, does not clearly connect the information provided in explanations to the actions users take, leaving effectiveness hard to evaluate. In this paper, we articulate this connection. We conducted a formative study through 14 interviews with end users in education and medicine. We contribute a catalog of information and associated actions: it maps 12 categories of information that participants described relying on to take 60 different actions. We show how AI creators can use the catalog's specificity and breadth to articulate how they expect the information in their explanations to lead to user actions, and to test those assumptions. We illustrate this approach with an exemplar XAI system. We conclude by discussing how our catalog expands the design space for XAI systems to support actionability.