Reinforcement Learning from Human Feedback (RLHF) has emerged as a prominent paradigm for training large language models and multimodal systems. Despite notable advances enabled by existing RLHF training frameworks, significant challenges remain in scaling to complex multimodal workflows and adapting to dynamic workloads. In particular, current systems are often constrained by controller scalability when managing large models, and by inefficiencies in orchestrating intricate RLHF pipelines, especially in scenarios that require dynamic sampling and resource allocation. In this paper, we introduce WeChat-YATT (Yet Another Transformer Trainer in WeChat), a simple, scalable, and balanced RLHF training framework designed to address these challenges. WeChat-YATT features a parallel controller programming model that enables flexible and efficient orchestration of complex RLHF workflows, mitigating the bottlenecks of centralized controller architectures and scaling to large-data regimes. In addition, we propose a dynamic placement schema that adaptively partitions computational resources and schedules workloads, significantly reducing hardware idle time and improving GPU utilization under variable training conditions. We evaluate WeChat-YATT across a range of experimental scenarios and show that it achieves substantial throughput improvements over state-of-the-art RLHF training frameworks. Furthermore, WeChat-YATT has been deployed to train models supporting WeChat product features for a large-scale user base, underscoring its effectiveness and robustness in real-world applications.
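To make the two core ideas concrete, the following is a minimal, purely illustrative sketch; it is not WeChat-YATT's actual API. All class and function names (Placement, rebalance, RolloutController, LearnerController) are hypothetical stand-ins, and plain Python threads and queues replace the distributed actors and sharded models a real framework would use. The sketch shows (a) per-role controllers that each drive their own worker group rather than routing all work through one centralized driver, and (b) a toy dynamic-placement policy that repartitions GPUs between generation and training based on load.

```python
"""Illustrative sketch (hypothetical names, not WeChat-YATT's real API) of a
parallel-controller RLHF loop with dynamic GPU placement."""

import queue
import threading
from dataclasses import dataclass
from typing import List


@dataclass
class Placement:
    """Which GPU ids each role currently owns; repartitioned between phases."""
    rollout_gpus: List[int]
    train_gpus: List[int]


def rebalance(placement: Placement, pending_rollouts: int) -> Placement:
    """Toy dynamic-placement policy: shift GPUs toward the busier role.

    A real scheduler would weigh measured throughput, memory headroom, and
    model-parallel constraints; here queue depth alone drives the split.
    """
    gpus = sorted(placement.rollout_gpus + placement.train_gpus)
    n_rollout = max(1, min(len(gpus) - 1, pending_rollouts))
    return Placement(rollout_gpus=gpus[:n_rollout], train_gpus=gpus[n_rollout:])


class RolloutController(threading.Thread):
    """Per-role controller: drives its own generation workers instead of
    routing every request through a single centralized driver."""

    def __init__(self, prompts: queue.Queue, samples: queue.Queue):
        super().__init__(daemon=True)
        self.prompts, self.samples = prompts, samples

    def run(self):
        while True:
            prompt = self.prompts.get()
            if prompt is None:  # shutdown signal
                break
            # Stand-in for sharded generation on this role's own GPU group.
            self.samples.put({"prompt": prompt, "response": f"<gen:{prompt}>"})


class LearnerController(threading.Thread):
    """Consumes samples and runs policy updates on its own GPU group."""

    def __init__(self, samples: queue.Queue, steps: int):
        super().__init__(daemon=True)
        self.samples, self.steps = samples, steps

    def run(self):
        for step in range(self.steps):
            batch = self.samples.get()
            # Stand-in for a policy-gradient update over `batch`.
            print(f"step {step}: trained on {batch['prompt']!r}")


if __name__ == "__main__":
    prompts, samples = queue.Queue(), queue.Queue()
    placement = Placement(rollout_gpus=[0, 1], train_gpus=[2, 3])

    for p in ["hello", "world", "wechat"]:
        prompts.put(p)
    # Sampling queue is deep, so the toy policy grants rollout more GPUs.
    placement = rebalance(placement, pending_rollouts=prompts.qsize())

    rollout = RolloutController(prompts, samples)
    learner = LearnerController(samples, steps=3)
    rollout.start()
    learner.start()
    learner.join(timeout=5)
    prompts.put(None)  # stop the rollout controller
```

Because each role owns its controller, adding a new stage (e.g. a reward scorer between rollout and learner) only adds another thread and queue; under these assumptions, no central driver needs to mediate every sample, which is the bottleneck the parallel-controller design avoids.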