Agentic reinforcement learning (RL) has emerged as a transformative workload in cloud clusters, enabling large language models (LLMs) to solve complex problems through interactions with the real world. However, unlike traditional RL, agentic RL demands substantial external cloud resources that exist outside the primary training cluster, e.g., CPUs for code execution and GPUs for reward models. Existing agentic RL frameworks typically rely on static over-provisioning, i.e., resources are often tied to long-lived trajectories or isolated by task, which leads to severe resource inefficiency. We propose action-level orchestration and incorporate it into ARL-Tangram, a unified resource management system that enables fine-grained external resource sharing and elasticity. ARL-Tangram combines a unified action-level formulation with an elastic scheduling algorithm to minimize action completion time (ACT) while satisfying heterogeneous resource constraints. Furthermore, tailored heterogeneous resource managers efficiently support action-level execution on resources with heterogeneous characteristics and topologies. Evaluation on real-world agentic RL tasks demonstrates that ARL-Tangram improves average ACT by up to 4.3$\times$, accelerates RL training steps by up to 1.5$\times$, and reduces external resource usage by up to 71.2\%. The system has been deployed to support the training of the MiMo series models.
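To make the action-level scheduling objective concrete, the following is a minimal sketch, not ARL-Tangram's published algorithm: a greedy earliest-completion heuristic that assigns each action to a compatible external resource pool (e.g., a CPU sandbox or a GPU reward-model replica) so as to minimize its action completion time (ACT). All class names, pool labels, and the longest-runtime-first ordering are illustrative assumptions.

```python
# Illustrative sketch only -- not ARL-Tangram's actual scheduler.
# Actions are matched to heterogeneous external resource pools greedily,
# placing each action on the compatible pool that yields the smallest ACT.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Pool:
    free_at: float                    # earliest time a slot in this pool is free
    name: str = field(compare=False)  # hypothetical pool label
    kind: str = field(compare=False)  # "cpu" (code exec) or "gpu" (reward model)

@dataclass
class Action:
    id: int
    kind: str            # resource type this action requires
    est_runtime: float   # estimated execution time in seconds

def schedule(actions, pools):
    """Greedy earliest-completion assignment (longest actions first)."""
    by_kind = {}
    for p in pools:
        heapq.heappush(by_kind.setdefault(p.kind, []), p)
    completion = {}
    for a in sorted(actions, key=lambda a: a.est_runtime, reverse=True):
        heap = by_kind[a.kind]
        pool = heapq.heappop(heap)           # compatible pool that frees up soonest
        act = pool.free_at + a.est_runtime   # this action's completion time (ACT)
        completion[a.id] = (pool.name, act)
        pool.free_at = act                   # slot is busy until the action finishes
        heapq.heappush(heap, pool)
    return completion

if __name__ == "__main__":
    pools = [Pool(0.0, "sandbox-0", "cpu"), Pool(0.0, "sandbox-1", "cpu"),
             Pool(0.0, "rm-0", "gpu")]
    actions = [Action(0, "cpu", 3.0), Action(1, "cpu", 1.0),
               Action(2, "gpu", 2.0), Action(3, "cpu", 2.0)]
    for aid, (pool, act) in sorted(schedule(actions, pools).items()):
        print(f"action {aid} -> {pool}, ACT = {act:.1f}s")
```

Because actions, rather than whole trajectories, are the unit of assignment here, pool slots are returned to the shared heap as soon as each action completes, which is the fine-grained sharing the abstract contrasts with static over-provisioning.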