Explainable Artificial Intelligence (XAI) systems aim to improve users' understanding of AI but rarely consider inclusivity aspects of XAI. Without inclusive approaches, improvements to explanations may not work well for everyone. This study investigates leveraging users' diverse problem-solving styles as an inclusive strategy to fix an XAI prototype, with the ultimate goal of improving users' mental models of the AI. We ran a between-subjects study with 69 participants. Our results show that the inclusivity fixes increased participants' engagement with explanations and produced significantly improved mental models. Analyzing differences in mental model scores further highlighted the specific inclusivity fixes that contributed to the significant improvement. To our surprise, the inclusivity fixes did not improve participants' prediction performance. However, the fixes did improve inclusivity support for women and promoted equity by reducing the gender gap.