Humans increasingly interact with artificial intelligence (AI) in decision-making. However, both AI and humans are prone to biases. While AI and human biases have been studied extensively in isolation, this paper examines their complex interaction. Specifically, we examined how class imbalance, as an AI bias, affects people's ability to rely appropriately on an AI-based decision-support system, and how it interacts with base rate neglect as a human bias. In a within-subjects online study (N = 46), participants classified three diseases using an AI-based decision-support system trained on either a balanced or an unbalanced dataset. We found that class imbalance disrupted participants' calibration of reliance on the AI. Moreover, we observed mutually reinforcing effects between class imbalance and base rate neglect, providing evidence of a compound human-AI bias. Based on these findings, we advocate an interactionist perspective and further research into the mutually reinforcing effects of biases in human-AI interaction.