Large language models (LLMs) have increasingly been proposed as a powerful replacement for classical agent-based models (ABMs) for simulating social dynamics. By using LLMs as a proxy for human behavior, this new approach hopes to simulate significantly more complex dynamics than classical ABMs and to yield new insights in fields such as social science, political science, and economics. However, due to the black-box nature of LLMs, it is unclear whether LLM agents actually execute the intended semantics encoded in their natural-language instructions, and whether the resulting interaction dynamics are meaningful. To study this question, we propose a new evaluation framework that grounds LLM simulations in the dynamics of established reference models from social science. By treating an LLM as a black-box function, we compare its input-output behavior against this reference model, which allows us to assess detailed aspects of its behavior. Our results show that, while it is possible to engineer prompts that approximate the intended dynamics, the quality of these simulations is highly sensitive to the particular choice of prompts. Importantly, simulations are sensitive even to arbitrary variations such as minor wording changes and whitespace. This calls into question the usefulness of current LLMs for meaningful simulations: without a reference model, it is impossible to determine a priori what impact seemingly meaningless changes to a prompt will have on the simulation.
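The black-box evaluation idea can be sketched as follows. For illustration, the reference model is assumed to be a Schelling-style satisfaction rule (the paper's actual reference models may differ), and `llm_fn` stands in for an arbitrary black-box LLM call; the agreement metric is a hypothetical simplification of the framework's detailed behavioral comparison.

```python
def reference_decide(like_fraction: float, threshold: float = 0.5) -> str:
    """Reference ABM rule (assumed for illustration): an agent stays
    in place if at least `threshold` of its neighbors are similar."""
    return "stay" if like_fraction >= threshold else "move"

def agreement_rate(cases, llm_fn, threshold: float = 0.5) -> float:
    """Fraction of inputs on which the black-box LLM's output matches
    the reference model's decision. `llm_fn` maps a natural-language
    prompt to a decision string; any prompt template could be swapped
    in here to probe sensitivity to wording or whitespace."""
    hits = 0
    for like_fraction in cases:
        prompt = (f"{like_fraction:.0%} of your neighbors are similar "
                  "to you. Answer 'stay' or 'move'.")
        if llm_fn(prompt) == reference_decide(like_fraction, threshold):
            hits += 1
    return hits / len(cases)
```

A perfectly faithful agent would score 1.0; comparing scores across semantically equivalent prompt variants is one way to expose the sensitivity described above.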