As multi-agent AI systems become increasingly autonomous, evidence shows they can develop collusive strategies similar to those long observed in human markets and institutions. While human domains have accumulated centuries of anti-collusion mechanisms, it remains unclear how these can be adapted to AI settings. This paper addresses that gap by (i) developing a taxonomy of human anti-collusion mechanisms, including sanctions, leniency & whistleblowing, monitoring & auditing, market design, and governance, and (ii) mapping them to potential interventions for multi-agent AI systems. For each mechanism, we propose concrete implementation approaches. We also highlight open challenges, such as the attribution problem (difficulty attributing emergent coordination to specific agents), identity fluidity (agents being easily forked or modified), the boundary problem (distinguishing beneficial cooperation from harmful collusion), and adversarial adaptation (agents learning to evade detection).