Existing agent benchmarks suffer from two critical limitations: high environment-interaction overhead (up to 41\% of total evaluation time) and imbalanced task-horizon and difficulty distributions that make aggregate scores unreliable. To address these issues, we propose ACE-Bench, built around a unified grid-based planning task in which agents must fill hidden slots in a partially completed schedule subject to both local slot constraints and global constraints. Our benchmark offers fine-grained control along two orthogonal axes: Scalable Horizons, controlled by the number of hidden slots $H$, and Controllable Difficulty, governed by a decoy budget $B$ that determines the number of globally misleading decoy candidates. Crucially, under a lightweight-environment design, all tool calls are resolved from static JSON files, eliminating setup overhead and enabling fast, reproducible evaluation suitable for training-time validation. We first validate that $H$ and $B$ provide reliable control over task horizon and difficulty, and that ACE-Bench exhibits strong domain consistency and model discriminability. We then conduct comprehensive experiments on 13 models of diverse sizes and families across 6 domains, revealing significant cross-model performance variation and confirming that ACE-Bench provides interpretable and controllable evaluation of agent reasoning.
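To make the lightweight-environment design concrete, the sketch below shows one plausible way tool calls could be answered by lookups into a precomputed static JSON file rather than a live service. This is a minimal illustration under our own assumptions; the class name \texttt{StaticEnv}, the file layout, and the key-canonicalization scheme are hypothetical and not taken from the paper.

\begin{verbatim}
import json

class StaticEnv:
    """Hypothetical static environment: every tool call is resolved
    from a precomputed JSON table, so no live services or per-episode
    setup are needed and runs are fully reproducible."""

    def __init__(self, episode_path: str):
        # episode_path points to a JSON file holding the partially
        # completed schedule (with H hidden slots) and a table of
        # precomputed tool responses.
        with open(episode_path) as f:
            spec = json.load(f)
        self.schedule = spec["schedule"]          # grid with hidden slots
        self.responses = spec["tool_responses"]   # {tool: {args_key: result}}

    def call_tool(self, tool: str, **args):
        # Resolve the call against the static table; serializing the
        # arguments with sorted keys makes the lookup deterministic.
        key = json.dumps(args, sort_keys=True)
        return self.responses[tool][key]

# Illustrative usage (file name and tool name are assumptions):
# env = StaticEnv("episodes/calendar_H8_B4.json")
# env.call_tool("query_slot", slot_id=3)
\end{verbatim}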