Recent advances in large language models (LLMs) have greatly improved their reasoning and decision-making abilities when deployed as agents. Richer reasoning, however, often comes at the cost of longer chains of thought (CoT), hampering interaction efficiency in real-world scenarios. Moreover, a systematic definition of LLM agent efficiency is still lacking, which hinders targeted improvements. To this end, we introduce dual-efficiency, comprising (i) step-level efficiency, which minimizes the tokens used per step, and (ii) trajectory-level efficiency, which minimizes the number of steps needed to complete a task. Building on this definition, we propose DEPO, a dual-efficiency preference optimization method that jointly rewards succinct responses and fewer action steps. Experiments on WebShop and BabyAI show that DEPO cuts token usage by up to 60.9% and steps by up to 26.9%, while achieving up to a 29.3% improvement in performance. DEPO also generalizes to three out-of-domain math benchmarks and retains its efficiency gains when trained on only 25% of the data. Our project page is at https://opencausalab.github.io/DEPO.