With the growing adoption of agent-based models in policy evaluation, a pressing question arises: can such systems effectively simulate and analyze complex social scenarios to inform policy decisions? Addressing this challenge could significantly enhance the policy-making process, offering researchers and practitioners a systematic way to validate, explore, and refine policy outcomes. To advance this goal, we introduce PolicySimEval, the first benchmark designed to evaluate the capability of agent-based simulations in policy assessment tasks. PolicySimEval aims to reflect the real-world complexities faced by social scientists and policymakers. The benchmark comprises three categories of evaluation tasks: (1) 20 comprehensive scenarios that replicate end-to-end policy modeling challenges, each accompanied by annotated expert solutions; (2) 65 targeted sub-tasks that address specific aspects of agent-based simulation (e.g., agent behavior calibration); and (3) 200 auto-generated tasks that enable large-scale evaluation and method development. Experiments show that current state-of-the-art frameworks struggle to tackle these tasks effectively: the highest-performing system achieves a coverage rate of only 24.5\% on comprehensive scenarios, 15.04\% on sub-tasks, and 14.5\% on auto-generated tasks. These results highlight the difficulty of the benchmark and the gap between current capabilities and the requirements of real-world policy evaluation.