Reinforcement Learning (RL) is essential for evolving Large Language Models (LLMs) into autonomous agents capable of long-horizon planning, yet a practical recipe for scaling RL in complex, multi-turn environments remains elusive. This paper presents a systematic empirical study using TravelPlanner, a challenging testbed requiring tool orchestration to satisfy multifaceted constraints. We decompose the agentic RL design space along five axes: reward shaping, model scaling, data composition, algorithm selection, and environmental stability. Our controlled experiments yield seven key takeaways, e.g., (1) reward and algorithm choices are scale-dependent: smaller models benefit from staged rewards and enhanced exploration, whereas larger models converge efficiently with simpler dense rewards; (2) ~1K training samples with a balanced difficulty mixture mark a sweet spot for both in-domain and out-of-domain performance; and (3) environmental stability is critical to prevent policy degradation. Based on our distilled recipe, our RL-trained models achieve state-of-the-art performance on TravelPlanner, significantly outperforming leading LLMs.
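To make takeaway (1) concrete, the sketch below contrasts the two reward-shaping schemes named in the abstract: a single dense reward proportional to the fraction of constraints satisfied, versus a staged reward that gates later constraint groups behind earlier ones. This is a minimal illustration only; the class name, field names, weights, and the 0.9 gating threshold are assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class PlanEval:
    format_ok: bool          # plan parses into the expected schema
    commonsense_rate: float  # fraction of commonsense constraints met
    hard_rate: float         # fraction of hard constraints met

def dense_reward(e: PlanEval) -> float:
    """Dense reward: weighted fraction of all constraints satisfied."""
    if not e.format_ok:
        return 0.0
    return 0.5 * e.commonsense_rate + 0.5 * e.hard_rate

def staged_reward(e: PlanEval) -> float:
    """Staged reward: later constraint groups only contribute once the
    earlier stage is (mostly) solved, easing exploration for small models."""
    if not e.format_ok:
        return 0.0                     # stage 0: produce a valid format
    reward = 0.3 * e.commonsense_rate  # stage 1: commonsense constraints
    if e.commonsense_rate >= 0.9:      # gate (threshold is an assumption)
        reward += 0.7 * e.hard_rate    # stage 2: hard constraints
    return reward
```

Under a staged scheme like this, a small model early in training receives gradient signal for solving the easier constraint group first, whereas the dense variant spreads credit across all constraints from the start, which the abstract reports suffices for larger models.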