Robots and other artificial intelligence (AI) systems are widely perceived as moral agents responsible for their actions. As AI proliferates, these perceptions may become entangled via the moral spillover of attitudes towards one AI to attitudes towards other AIs. We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments. In Study 1 (N = 720), we established the moral spillover effect in human-AI interaction by showing that immoral actions increased attributions of negative moral agency (i.e., acting immorally) and decreased attributions of positive moral agency (i.e., acting morally) and moral patiency (i.e., deserving moral concern) to both the agent (a chatbot or human assistant) and the group to which they belong (all chatbot assistants or all human assistants). There was no significant difference in the spillover effects between the AI and human contexts. In Study 2 (N = 684), we tested whether spillover persisted when the agent was individuated with a name and described as an AI or human, rather than specifically as a chatbot or personal assistant. We found that spillover persisted in the AI context but not in the human context, possibly because AIs were perceived as more homogeneous due to their outgroup status relative to humans. This asymmetry suggests a double standard whereby AIs are judged more harshly than humans when one agent morally transgresses. With the proliferation of diverse, autonomous AI systems, HCI research and design should account for the fact that experiences with one AI could easily generalize to perceptions of all AIs and lead to negative HCI outcomes, such as reduced trust.