Reinforcement learning (RL) is a critical stage in post-training large language models (LLMs), involving repeated interaction among rollout generation, reward evaluation, and centralized learning. Distributing rollout execution offers opportunities to leverage more cost-efficient inference resources, but introduces challenges in wide-area coordination and policy dissemination. We present ECHO-2, a distributed RL framework for post-training with remote inference workers and non-negligible dissemination latency. ECHO-2 combines centralized learning with distributed rollouts and treats bounded policy staleness as a user-controlled parameter, enabling rollout generation, dissemination, and training to overlap. We introduce an overlap-based capacity model that relates training time, dissemination latency, and rollout throughput, yielding a practical provisioning rule for sustaining learner utilization. To mitigate dissemination bottlenecks and lower cost, ECHO-2 employs peer-assisted pipelined broadcast and cost-aware activation of heterogeneous workers. Experiments on GRPO post-training of 4B and 8B models under real wide-area bandwidth regimes show that ECHO-2 significantly improves cost efficiency while achieving RL reward comparable to strong baselines.
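The abstract does not state the capacity model's exact form. As a minimal illustrative sketch only, one plausible instantiation assumes a learner that consumes a fixed batch per training step, workers that generate rollouts at a steady rate, and a staleness bound of K training steps, of which dissemination latency consumes a fixed share of each window; all names and the formula below are hypothetical, not ECHO-2's actual model.

```python
import math

def min_workers(batch_size, t_train, d_diss, rate_per_worker, staleness_k):
    """Hypothetical overlap-based provisioning rule (not the paper's formula).

    batch_size      -- rollouts the learner consumes per training step
    t_train         -- wall-clock seconds per training step
    d_diss          -- seconds to disseminate a new policy to workers
    rate_per_worker -- rollouts per second a single worker generates
    staleness_k     -- max allowed policy staleness, in training steps

    A rollout is usable only if generated within the staleness window of
    K * t_train seconds; dissemination eats d_diss seconds of that window,
    leaving a fraction (1 - d_diss / window) for productive generation.
    """
    window = staleness_k * t_train
    if d_diss >= window:
        raise ValueError("dissemination latency exceeds the staleness window")
    effective = 1.0 - d_diss / window  # productive fraction of each window
    # Aggregate effective throughput must keep pace with learner consumption:
    # n * rate * t_train * effective >= batch_size
    return math.ceil(batch_size / (rate_per_worker * t_train * effective))
```

Under this toy model, a larger staleness bound K or faster dissemination shrinks the required worker pool, matching the abstract's claim that bounded staleness lets generation, dissemination, and training overlap.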