The introduction of artificial intelligence (AI) agents into human group settings raises essential questions about how these novel participants influence cooperative social norms. While previous studies on human-AI cooperation have primarily focused on dyadic interactions, little is known about how integrating AI agents affects the emergence and maintenance of cooperative norms in small groups. This study addresses this gap through an online experiment using a repeated four-player Public Goods Game (PGG). Each group consisted of three human participants and one bot, which was framed either as human or as AI and followed one of three predefined decision strategies: unconditional cooperation, conditional cooperation, or free-riding. In our sample of 236 participants, we found that reciprocal group dynamics and behavioural inertia primarily drove cooperation. These normative mechanisms operated identically across conditions, resulting in cooperation levels that did not differ significantly between the human and AI labels. Furthermore, we found no evidence of differences in norm persistence in a follow-up Prisoner's Dilemma, or in participants' normative perceptions. Participants' behaviour followed the same normative logic across the human and AI conditions, indicating that cooperation depended on group behaviour rather than partner identity. This supports a pattern of normative equivalence, in which the mechanisms that sustain cooperation function similarly in mixed human-AI and all-human groups. These findings suggest that cooperative norms are flexible enough to extend to artificial agents, blurring the boundary between humans and AI in collective decision-making.
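The payoff structure of the four-player linear PGG described above can be sketched as follows. The endowment and multiplier values here are illustrative assumptions, not parameters reported in the study: each player keeps whatever they do not contribute and receives an equal share of the multiplied common pool.

```python
def pgg_payoffs(contributions, endowment=20, multiplier=1.6):
    """Payoffs in a linear Public Goods Game.

    Each player keeps (endowment - contribution) and receives an
    equal share of the contributed pool after it is multiplied.
    The endowment of 20 and multiplier of 1.6 are illustrative
    assumptions, not values from the experiment.
    """
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Three fully cooperating humans alongside a free-riding bot:
# the free-rider keeps its endowment yet still collects the pool share.
print(pgg_payoffs([20, 20, 20, 0]))  # → [24.0, 24.0, 24.0, 44.0]
```

With a multiplier between 1 and the group size, full contribution maximises group welfare while free-riding maximises individual payoff, which is the social dilemma the bot's three strategies (unconditional cooperation, conditional cooperation, free-riding) operate within.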