Explainable Artificial Intelligence (XAI) systems aim to improve users' understanding of AI but rarely consider inclusivity. Without inclusive approaches, improved explanations may not work well for everyone. This study investigates leveraging users' diverse problem-solving styles as an inclusive strategy to fix an XAI prototype, with the ultimate goal of improving users' mental models of the AI. We ran a between-subjects study with 69 participants. Our results show that the inclusivity fixes increased participants' engagement with the explanations and significantly improved their mental models. Analyzing differences in mental model scores further pinpointed the specific inclusivity fixes that contributed to this significant improvement.