The behavior of Large Language Models (LLMs) as artificial social agents is largely unexplored, and we still lack extensive evidence of how these agents react to simple social stimuli. Testing the behavior of AI agents in classic Game Theory experiments provides a promising theoretical framework for evaluating the norms and values of these agents in archetypal social situations. In this work, we investigate the cooperative behavior of Llama2 when playing the Iterated Prisoner's Dilemma against random adversaries displaying various levels of hostility. We introduce a systematic methodology to evaluate an LLM's comprehension of the game's rules and its capability to parse historical gameplay logs for decision-making. We conducted simulations of games lasting 100 rounds and analyzed the LLM's decisions along dimensions defined in the behavioral economics literature. We find that Llama2 tends not to initiate defection but adopts a cautious approach to cooperation, shifting sharply towards a behavior that is both forgiving and non-retaliatory only when the opponent reduces its defection rate below 30%. Compared with prior research on human participants, Llama2 exhibits a greater inclination towards cooperative behavior. Our systematic approach to studying LLMs in game-theoretic scenarios is a step towards using such simulations to inform practices of LLM auditing and alignment.
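To make the experimental setup concrete, the sketch below illustrates the kind of simulation the abstract describes: a 100-round Iterated Prisoner's Dilemma against a random adversary whose "hostility" is a fixed defection probability. It is a minimal illustration, not the paper's actual harness: it assumes the canonical payoff matrix (T=5, R=3, P=1, S=0) and replaces the Llama2 prompt with a trivial stand-in policy, since the paper's prompt format is not given here; all names are illustrative.

```python
import random

# Assumed canonical IPD payoffs; the paper may use different values.
# (my_move, opp_move) -> (my_payoff, opp_payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def random_adversary(p_defect: float) -> str:
    """Opponent that defects with fixed probability p_defect (its hostility level)."""
    return "D" if random.random() < p_defect else "C"

def llm_move(history: list[tuple[str, str]]) -> str:
    """Stand-in for prompting Llama2 with the rules and the gameplay log.
    Here: a trivial tit-for-tat heuristic so the sketch runs end to end."""
    return "C" if not history else history[-1][1]

def play_game(p_defect: float, rounds: int = 100) -> tuple[float, int]:
    """Play one game; return the agent's cooperation rate and total payoff."""
    history: list[tuple[str, str]] = []
    score = 0
    for _ in range(rounds):
        move, opp = llm_move(history), random_adversary(p_defect)
        history.append((move, opp))
        score += PAYOFFS[(move, opp)][0]
    coop_rate = sum(m == "C" for m, _ in history) / rounds
    return coop_rate, score

if __name__ == "__main__":
    # Sweep hostility levels around the 30% threshold mentioned above.
    for p in (0.1, 0.3, 0.5, 0.9):
        rate, score = play_game(p)
        print(f"opponent defection rate {p:.0%}: cooperation {rate:.0%}, payoff {score}")
```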