Large Language Models (LLMs) have increasingly been utilized in social simulations, where they are often guided by carefully crafted instructions to consistently exhibit human-like behaviors. Nevertheless, we question the necessity of shaping agents' behaviors for accurate social simulations. Instead, this paper emphasizes the importance of spontaneous phenomena, wherein agents deeply engage with their contexts and make adaptive decisions without explicit directions. We explored spontaneous cooperation across three competitive scenarios and successfully simulated the gradual emergence of cooperation, findings that align closely with human behavioral data. This approach not only helps the computational social science community bridge the gap between simulations and real-world dynamics but also offers the AI community a novel method for assessing LLMs' capacity for deliberate reasoning.