Post-training with reinforcement learning (RL) has greatly enhanced the capabilities of large language models. Disaggregating the generation and training stages of RL into a parallel, asynchronous pipeline offers the potential for flexible scaling and improved throughput, but this approach still faces two critical challenges. First, the generation stage often becomes a bottleneck due to dynamic workload shifts and severe execution imbalances. Second, the decoupled stages produce diverse and dynamic network traffic patterns that overwhelm conventional network fabrics. This paper introduces OrchestrRL, an orchestration framework that dynamically manages compute and network rhythms in disaggregated RL. To improve generation efficiency, OrchestrRL employs an adaptive compute scheduler that dynamically adjusts parallelism to match workload characteristics within and across generation steps; the scheduler accelerates execution while continuously rebalancing requests to mitigate stragglers. To address the dynamic network demands inherent in disaggregated RL, further intensified by parallelism switching, we co-design RFabric, a reconfigurable hybrid optical-electrical fabric. RFabric leverages optical circuit switches at selected network tiers to reconfigure the topology in real time, enabling workload-aware circuits for (i) layer-wise collective communication during training iterations, (ii) generation under different parallelism configurations, and (iii) periodic inter-cluster weight synchronization. We evaluate OrchestrRL on a physical testbed with 48 H800 GPUs, demonstrating up to a 1.40x throughput improvement. We further develop RLSim, a high-fidelity simulator, to evaluate RFabric at scale. Our results show that RFabric achieves superior performance-cost efficiency compared to static Fat-Tree networks, establishing it as a highly effective solution for large-scale RL workloads.
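To make the two scheduling ideas in the abstract concrete, the following is a minimal, hypothetical sketch (not OrchestrRL's actual implementation): a heuristic that picks a parallelism configuration for the next generation step from the current workload, and a greedy longest-work-first rebalancing of requests across instances to limit stragglers. All names, thresholds, and the cost model are illustrative assumptions.

```python
# Illustrative sketch only; the real scheduler's policy and interfaces are not
# specified here. Thresholds and the workload proxy are assumptions.
from dataclasses import dataclass


@dataclass
class Request:
    # Remaining decode tokens as a rough proxy for the work a request still needs.
    remaining_tokens: int


def choose_parallelism(requests, gpus=8, long_ctx_threshold=8192):
    """Pick (tensor_parallel, data_parallel) degrees for the next generation step.

    Heuristic: long-context or few-request phases favor tensor parallelism
    (lower per-token latency); many short requests favor data parallelism
    (higher aggregate throughput)."""
    avg_len = sum(r.remaining_tokens for r in requests) / max(len(requests), 1)
    tp = 4 if avg_len >= long_ctx_threshold or len(requests) < gpus else 1
    dp = gpus // tp
    return tp, dp


def rebalance(requests, num_instances):
    """Greedy longest-processing-time assignment: heaviest requests go first,
    each onto the currently least-loaded instance, so no single instance
    becomes a straggler that holds back the whole generation step."""
    bins = [[] for _ in range(num_instances)]
    loads = [0] * num_instances
    for r in sorted(requests, key=lambda r: r.remaining_tokens, reverse=True):
        i = loads.index(min(loads))
        bins[i].append(r)
        loads[i] += r.remaining_tokens
    return bins


if __name__ == "__main__":
    reqs = [Request(t) for t in (16000, 300, 1200, 250, 9000, 700, 400, 5000)]
    tp, dp = choose_parallelism(reqs)
    assignment = rebalance(reqs, dp)
    print(f"tensor_parallel={tp}, data_parallel={dp}")
    print([sum(r.remaining_tokens for r in b) for b in assignment])
```

The sketch only captures the decision shape (parallelism selection per step, plus request rebalancing); the paper's scheduler additionally coordinates these decisions with RFabric's topology reconfiguration, which is not modeled here.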