In numerous high-stakes domains, training novices via conventional learning systems does not suffice. Imparting tacit knowledge requires experts' hands-on guidance. However, having experts train novices is costly and time-consuming, increasing the need for alternatives. Explainable artificial intelligence (XAI) has conventionally been used to make black-box artificial intelligence systems interpretable. In this work, we utilize XAI as such an alternative: an (X)AI system is trained on experts' past decisions and is then employed to teach novices by providing examples coupled with explanations. In a study with 249 participants, we measure the effectiveness of this approach for a classification task. We show that (X)AI-based learning systems can induce learning in novices and that their cognitive styles moderate learning. We thereby take first steps toward revealing the impact of XAI on human learning and point AI developers to future options for tailoring the design of (X)AI-based learning systems.