Large language model (LLM) simulations of human behavior have the potential to transform the social and behavioral sciences, but only if they faithfully reflect real human behavior. Current evaluations are fragmented, relying on bespoke tasks and metrics and yielding a patchwork of incomparable results. To address this, we introduce SimBench, the first large-scale, standardized benchmark for a robust, reproducible science of LLM simulation. By unifying 20 diverse datasets, covering tasks from moral decision-making to economic choice and drawn from a large global participant pool, SimBench provides the foundation needed to ask fundamental questions about when, how, and why LLM simulations succeed or fail. We show that, while even the best LLMs today have limited simulation ability (score: 40.80/100), performance scales log-linearly with model size. Increased inference-time compute does not improve simulation performance. We identify an alignment-simulation trade-off: instruction-tuning improves performance on low-entropy (consensus) questions but degrades it on high-entropy (diverse) ones. Models particularly struggle when simulating specific demographic groups. Finally, we find that simulation ability correlates most strongly with deep, knowledge-intensive reasoning (MMLU-Pro, r=0.939). By making progress measurable, we aim to accelerate the development of more faithful LLM simulators.
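To make the low-/high-entropy distinction concrete, below is a minimal sketch of how one might classify a question as consensus or diverse by the Shannon entropy of its human answer distribution. The function name `answer_entropy` and the threshold value are illustrative assumptions, not SimBench's actual implementation.

```python
import numpy as np

def answer_entropy(counts):
    """Shannon entropy (in bits) of a human answer distribution.

    counts: per-option response counts for one question.
    """
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()          # normalize counts to probabilities
    p = p[p > 0]             # drop empty options so log2 is defined
    return float(-(p * np.log2(p)).sum())

# Hypothetical cutoff: questions below it count as "low-entropy"
# (consensus), above it as "high-entropy" (diverse).
THRESHOLD = 1.0

consensus_q = answer_entropy([95, 3, 2])      # ~0.34 bits -> low-entropy
diverse_q = answer_entropy([30, 25, 25, 20])  # ~1.99 bits -> high-entropy
print(consensus_q < THRESHOLD, diverse_q > THRESHOLD)  # True True
```

Under this reading, the alignment-simulation trade-off means instruction-tuned models gain accuracy on questions like the first example but lose fidelity on response distributions like the second.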