Reinforcement learning (RL) is a critical stage in post-training large language models (LLMs), involving repeated interaction among rollout generation, reward evaluation, and centralized learning. Distributing rollout execution offers the opportunity to leverage more cost-efficient inference resources, but introduces challenges in wide-area coordination and policy dissemination. We present ECHO-2, a distributed RL framework for post-training with remote inference workers and non-negligible dissemination latency. ECHO-2 combines centralized learning with distributed rollouts and treats bounded policy staleness as a user-controlled parameter, enabling rollout generation, dissemination, and training to overlap. We introduce an overlap-based capacity model that relates training time, dissemination latency, and rollout throughput, yielding a practical provisioning rule for sustaining learner utilization. To mitigate dissemination bottlenecks and lower cost, ECHO-2 employs peer-assisted pipelined broadcast and cost-aware activation of heterogeneous workers. Experiments on GRPO post-training of 4B and 8B models under real wide-area bandwidth regimes show that ECHO-2 significantly improves cost efficiency while preserving RL reward comparable to strong baselines.
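The capacity model is stated only at a high level in this abstract; the following is a minimal sketch of what an overlap-based provisioning rule of this shape could look like. All symbols ($T_{\text{train}}$, $D$, $R$, $B$, $K$) are illustrative assumptions introduced here, not the paper's notation.

Let $T_{\text{train}}$ denote the per-step training time, $D$ the policy dissemination latency to remote workers, $R$ the aggregate rollout throughput (rollouts per second across active workers), and $B$ the number of rollouts consumed per training step. Sustaining learner utilization requires production to keep pace with consumption,
\[
  R \, T_{\text{train}} \;\ge\; B
  \quad\Longleftrightarrow\quad
  R \;\ge\; \frac{B}{T_{\text{train}}},
\]
which reads as a provisioning rule: activate workers (e.g., in order of cost efficiency) until aggregate throughput reaches $B / T_{\text{train}}$. For dissemination and generation to hide fully behind training, a staleness bound $K$ (in training steps) would then need to cover one dissemination plus one batch of generation,
\[
  K \;\ge\; \left\lceil \frac{D + B/R}{T_{\text{train}}} \right\rceil ,
\]
so that a larger $D$ trades directly against either more rollout throughput $R$ or a looser staleness bound $K$.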