We introduce an approach to evaluating language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games let us study multi-turn and cross-model interactions, modulate task complexity, and side-step accidental evaluation-data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only the closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved the most challenging for the models; and (iii) even the most powerful models sometimes "lose" to weaker opponents.
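To make the setup concrete, the following is a minimal sketch of a multi-turn negotiation loop between two LM agents, covering both self-play and cross-play; the `negotiate` function, the `ACCEPT` criterion, and the agent wrappers are illustrative assumptions, not the authors' actual evaluation harness.

```python
# Minimal sketch of a multi-turn negotiation loop between two LM agents.
# The agent wrappers, acceptance criterion, and round limit are assumptions
# for illustration, not the paper's actual harness.

from typing import Callable, List, Optional

def negotiate(
    agent_a: Callable[[str], str],   # wrapper around one LM
    agent_b: Callable[[str], str],   # same model for self-play, a different one for cross-play
    opening_prompt: str,
    max_rounds: int = 10,
) -> Optional[str]:
    """Alternate messages between two LM agents until one accepts or the round limit is hit."""
    transcript: List[str] = [opening_prompt]
    agents = (agent_a, agent_b)
    for turn in range(max_rounds * 2):
        speaker = agents[turn % 2]
        message = speaker("\n".join(transcript))
        transcript.append(message)
        if "ACCEPT" in message.upper():  # toy acceptance criterion
            return message               # agreed outcome, to be scored against the game's payoffs
    return None                          # no agreement within the round limit

# Self-play:  negotiate(model_x, model_x, prompt)
# Cross-play: negotiate(model_x, model_y, prompt)
```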