As Large Language Models (LLMs) increasingly operate as Deep Research (DR) agents capable of autonomous investigation and information synthesis, reliable evaluation of their task performance has become a critical bottleneck. Current benchmarks predominantly rely on static datasets, which suffer from several limitations: limited task generality, temporal misalignment, and data contamination. To address these limitations, we introduce DR-Arena, a fully automated evaluation framework that pushes DR agents to their capability limits through dynamic investigation. DR-Arena constructs real-time Information Trees from fresh web trends to keep the evaluation rubric synchronized with the live world state, and employs an automated Examiner to generate structured tasks that test two orthogonal capabilities: Deep reasoning and Wide coverage. DR-Arena further adopts an Adaptive Evolvement Loop, a state-machine controller that dynamically escalates task complexity based on real-time performance, demanding deeper deduction or wider aggregation until a decisive capability boundary emerges. Experiments with six advanced DR agents show that DR-Arena achieves a Spearman correlation of 0.94 with the LMSYS Search Arena leaderboard, representing state-of-the-art alignment with human preferences without any manual effort and validating DR-Arena as a reliable alternative to costly human adjudication.
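To make the escalation mechanism concrete, the sketch below illustrates one plausible shape for an adaptive evaluation loop of this kind: a controller that re-queries an examiner at increasing complexity levels along either the Deep or Wide axis until the agent fails, taking the last solved level as its capability boundary. This is a minimal illustration only; all identifiers (`examiner.generate`, `agent.run`, `judge.passes`, `max_level`) are hypothetical and are not drawn from the paper's actual implementation.

```python
from enum import Enum, auto

class Axis(Enum):
    DEEP = auto()  # deeper multi-hop deduction
    WIDE = auto()  # wider evidence aggregation

class AdaptiveLoop:
    """Illustrative state-machine controller (hypothetical API): escalate
    task complexity along one axis while the agent keeps succeeding, and
    stop at the first failure, which marks the capability boundary."""

    def __init__(self, examiner, agent, judge, max_level: int = 6):
        self.examiner = examiner    # generates a task for (axis, level)
        self.agent = agent          # the DR agent under evaluation
        self.judge = judge          # scores the agent's answer pass/fail
        self.max_level = max_level  # cap on escalation depth/breadth

    def probe(self, axis: Axis) -> int:
        level = 1
        while level <= self.max_level:
            task = self.examiner.generate(axis=axis, level=level)
            answer = self.agent.run(task)
            if not self.judge.passes(task, answer):
                break       # decisive failure: boundary found
            level += 1      # escalate complexity and re-test
        return level - 1    # highest complexity level the agent solved
```

Under this reading, an agent's final profile would be a pair of boundary levels, one per axis (e.g., `probe(Axis.DEEP)` and `probe(Axis.WIDE)`), which matches the abstract's framing of Deep reasoning and Wide coverage as orthogonal capabilities.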