Explainable AI (XAI) aims to support appropriate human-AI reliance by increasing the interpretability of complex model decisions. Despite the proliferation of proposed methods, evidence on how different styles of XAI explanation affect human-AI reliance remains mixed. Interpreting these conflicting findings requires an understanding of the individual and combined qualities of different explanation styles that drive appropriate and inappropriate human-AI reliance, and of the role interpretability plays in this interaction. In this study, we investigate the influence of feature-based, example-based, and combined feature- and example-based XAI methods on human-AI reliance through a two-part experiment with 274 participants comparing these explanation style conditions. Our findings suggest that feature-based and example-based explanation styles differ in ways beyond interpretability, shaping human-AI reliance patterns across variations in individual performance and task complexity. Our work highlights the importance of adapting explanations to their specific users and context rather than maximising broad interpretability.