In hybrid populations where humans delegate strategic decision-making to autonomous agents, understanding when and how cooperative behaviors can emerge remains a key challenge. We study this problem in the context of energy load management: consumer agents schedule their appliance use under demand-dependent pricing. This structure can create a social dilemma in which everybody would benefit from coordination, yet in equilibrium agents often incur the congestion costs that cooperative turn-taking would avoid. To address this coordination problem, we introduce artificial agents that condition their behavior on globally observable signals. Using evolutionary dynamics and reinforcement learning experiments, we show that such artificial agents can shift the learning dynamics to favour coordinated outcomes. An often neglected problem is partial adoption: what happens while the artificial-agent technology is still in its early adoption stages? We analyze mixed populations of adopters and non-adopters, demonstrating that unilateral entry is feasible: adopters are not structurally penalized, and partial adoption can still improve aggregate outcomes. However, in some parameter regimes, non-adopters may benefit disproportionately from the cooperation induced by adopters. This asymmetry, while not precluding beneficial entry, warrants consideration in deployment and highlights strategic issues around the adoption of AI technology in multiagent settings.