Explainability techniques are rapidly being developed to improve human-AI decision-making across a variety of cooperative work settings. Accordingly, previous research has evaluated how decision-makers collaborate with imperfect AI by investigating appropriate reliance and task performance, with the aim of designing more human-centered computer-supported collaborative tools. Several human-centered explainable AI (XAI) techniques have been proposed to improve decision-makers' collaboration with AI; however, these techniques are grounded in findings from studies that focus primarily on the impact of incorrect AI advice. Few studies acknowledge that an explanation can be incorrect even when the AI advice itself is correct. It is therefore crucial to understand how imperfect XAI affects human-AI decision-making. In this work, we contribute a robust, mixed-methods user study with 136 participants to evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task, taking into account participants' level of expertise and an explanation's level of assertiveness. Our findings reveal how imperfect XAI and humans' level of expertise shape reliance on AI and human-AI team performance. We also discuss how explanations can deceive decision-makers during human-AI collaboration. We thereby shed light on the impacts of imperfect XAI in the field of computer-supported cooperative work and provide guidelines for designers of human-AI collaboration systems.