Large language model (LLM) simulations of human behavior have the potential to revolutionize the social and behavioral sciences, but only if they faithfully reflect real human behavior. Current evaluations are fragmented, each built on bespoke tasks and metrics, producing a patchwork of incomparable results. To address this, we introduce SimBench, the first large-scale, standardized benchmark for a robust, reproducible science of LLM simulation. By unifying 20 diverse datasets, covering tasks from moral decision-making to economic choice and spanning a large global participant pool, SimBench provides the foundation needed to ask fundamental questions about when, how, and why LLM simulations succeed or fail. We show that while even the best LLMs today have only limited simulation ability (score: 40.80/100), performance scales log-linearly with model size; increased inference-time compute, by contrast, does not improve it. We demonstrate an alignment-simulation trade-off: instruction-tuning improves performance on low-entropy (consensus) questions but degrades it on high-entropy (diverse) ones. Models struggle most when simulating specific demographic groups. Finally, we show that simulation ability correlates most strongly with deep, knowledge-intensive reasoning (MMLU-Pro, r = 0.939). By making progress measurable, we aim to accelerate the development of more faithful LLM simulators.
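To make the entropy-based split behind the alignment-simulation trade-off concrete, the sketch below illustrates one way such an evaluation could be scored: partition questions by the Shannon entropy of the human response distribution, then compare each simulated distribution against the human one. The entropy threshold, the total-variation metric, and all data here are illustrative assumptions, not SimBench's actual scoring procedure.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete response distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # skip zero-probability options; 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

def total_variation(p, q):
    """Total variation distance between two distributions over the same options."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

# Hypothetical data: per-question human answer shares and a model's
# simulated answer shares over the same answer options.
human = {
    "q1": [0.90, 0.05, 0.05],  # near-consensus question (low entropy)
    "q2": [0.40, 0.35, 0.25],  # diverse-opinion question (high entropy)
}
model = {
    "q1": [0.85, 0.10, 0.05],
    "q2": [0.70, 0.20, 0.10],
}

THRESHOLD_BITS = 1.0  # split point is an assumption, not from the paper
for qid, p in human.items():
    bucket = "low-entropy" if entropy(p) < THRESHOLD_BITS else "high-entropy"
    err = total_variation(p, model[qid])
    print(f"{qid}: {bucket}, TV distance = {err:.2f}")
```

Averaging such per-question distances within each entropy bucket would let one read off the trade-off directly: an instruction-tuned model that collapses onto the modal answer shrinks the error on questions like q1 while inflating it on questions like q2.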