LLM-based multi-agent systems (LLM-MAS), in which autonomous AI agents cooperate to solve tasks, are gaining increasing attention. For such systems to be deployed in society, agents must be able to establish cooperation and coordination under real-world computational and communication constraints. We propose the FLCOA framework (Five Layers for Cooperation/Coordination among Autonomous Agents) to conceptualize how cooperation and coordination emerge in groups of autonomous agents, and highlight that the influence of lower-layer factors, especially computational and communication resources, has been largely overlooked. To examine the effect of communication delay, we introduce a Continuous Prisoner's Dilemma with Communication Delay and conduct simulations with LLM-based agents. As delay increases, agents begin to exploit slower-responding partners even without explicit instructions to do so. Interestingly, excessive delay reduces cycles of exploitation, yielding a U-shaped relationship between delay magnitude and mutual cooperation. These results suggest that fostering cooperation requires attention not only to high-level institutional design but also to lower-layer factors such as communication delay and resource allocation, pointing to new directions for MAS research.
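To make the game setup concrete, the following is a minimal sketch of a Continuous Prisoner's Dilemma in which each agent only observes its partner's action after a delay of `delay` rounds. The linear payoff function, the reciprocal (tit-for-tat-like) policy, and the `exploit_b` discount factor modeling a partially exploitative agent are illustrative assumptions, not the paper's actual specification or the LLM-based agents used in the experiments.

```python
from collections import deque

def continuous_pd_payoff(own: float, other: float) -> float:
    # Hypothetical linear payoff: benefit from the partner's cooperation
    # level minus the cost of one's own cooperation (assumed form, not
    # necessarily the payoff used in the paper).
    return 3.0 * other - 1.0 * own

def simulate(delay: int, rounds: int = 20, exploit_b: float = 1.0):
    """Two reactive agents play a continuous PD (actions in [0, 1]);
    each agent sees only the partner's action from `delay` rounds ago,
    modeling communication delay."""
    # Observation buffers, initialized to full cooperation (1.0).
    # Index 0 holds the oldest (i.e., delayed) observed action.
    seen_by_a = deque([1.0] * (delay + 1), maxlen=delay + 1)
    seen_by_b = deque([1.0] * (delay + 1), maxlen=delay + 1)
    total_a = total_b = 0.0
    for _ in range(rounds):
        # Agent A reciprocates the (stale) observed action; agent B
        # reciprocates scaled by exploit_b (< 1.0 means it under-cooperates).
        act_a = seen_by_a[0]
        act_b = exploit_b * seen_by_b[0]
        total_a += continuous_pd_payoff(act_a, act_b)
        total_b += continuous_pd_payoff(act_b, act_a)
        # Each agent's new observation arrives at the back of the buffer.
        seen_by_a.append(act_b)
        seen_by_b.append(act_a)
    return total_a, total_b
```

With two purely reciprocal agents (`exploit_b=1.0`) cooperation is self-sustaining at any delay; setting `exploit_b < 1.0` lets one observe how delay stretches out the lag before the exploited agent retaliates, the kind of dynamics the paper studies with LLM-based agents.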