The evolution of cooperation has been extensively studied using abstract mathematical models and simulations. Recent advances in Large Language Models (LLMs) and the rise of LLM agents have demonstrated their capacity for social reasoning, offering an opportunity to test the emergence of norms in more realistic agent-based simulations with human-like, natural-language reasoning. In this work, we investigate whether the cooperation dynamics of Boyd and Richerson's abstract mathematical model persist in a more realistic simulation of the Diner's Dilemma using LLM agents. Our findings indicate that the agents follow the strategies defined in Boyd and Richerson's model, and that explicit punishment mechanisms drive norm emergence, reinforcing cooperative behaviour even as the agents' strategy configuration varies. These results suggest that LLM-based Multi-Agent System (MAS) simulations can indeed replicate the evolution of cooperation predicted by traditional mathematical models. Moreover, our simulations extend beyond those models by integrating natural-language-driven reasoning and a pairwise imitation method for strategy adoption, making them a more realistic testbed for cooperative behaviour in MASs.
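The pairwise imitation method for strategy adoption mentioned above can be sketched with the Fermi update rule, a standard choice in evolutionary game theory: an agent compares its payoff with a randomly paired partner and copies the partner's strategy with a probability that grows with the payoff difference. This is a minimal illustrative sketch; the function names, the `noise` parameter, and the use of the Fermi rule itself are assumptions for illustration, not details taken from the paper.

```python
import math
import random

def fermi_imitation(payoff_self, payoff_other, noise=0.1):
    """Probability of adopting the partner's strategy under the Fermi rule.

    Higher partner payoffs make adoption more likely; `noise` controls
    how sharply the probability responds to the payoff difference.
    (Illustrative assumption: the paper need not use this exact rule.)
    """
    return 1.0 / (1.0 + math.exp(-(payoff_other - payoff_self) / noise))

def pairwise_imitation_step(strategies, payoffs, noise=0.1, rng=random):
    """One round of pairwise imitation over a population.

    Each agent is paired with a random other agent and may copy that
    agent's strategy with probability given by `fermi_imitation`.
    """
    new_strategies = list(strategies)
    n = len(strategies)
    for i in range(n):
        j = rng.randrange(n)
        if j == i:
            continue  # no self-comparison
        if rng.random() < fermi_imitation(payoffs[i], payoffs[j], noise):
            new_strategies[i] = strategies[j]  # imitate the partner
    return new_strategies
```

In an LLM-agent setting, the numeric payoffs here would come from each round of the Diner's Dilemma, while the agents' round-by-round decisions are produced by natural-language reasoning rather than a fixed rule.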