Reinforcement Learning from Human Feedback (RLHF) has emerged as a prominent paradigm for training large language models and multimodal systems. Despite notable advances enabled by existing RLHF training frameworks, significant challenges remain in scaling to complex multimodal workflows and adapting to dynamic workloads. In particular, current systems often encounter controller scalability limits when managing large models, as well as inefficiencies in orchestrating intricate RLHF pipelines, especially in scenarios that require dynamic sampling and resource allocation. In this paper, we introduce WeChat-YATT (Yet Another Transformer Trainer in WeChat), a simple, scalable, and balanced RLHF training framework designed to address these challenges. WeChat-YATT features a parallel controller programming model that enables flexible and efficient orchestration of complex RLHF workflows, mitigating the bottlenecks of centralized controller architectures and facilitating scalability in large-scale data scenarios. In addition, we propose a dynamic placement schema that adaptively partitions computational resources and schedules workloads, significantly reducing hardware idle time and improving GPU utilization under variable training conditions. We evaluate WeChat-YATT across a range of experimental scenarios and show that it achieves substantial throughput improvements over state-of-the-art RLHF training frameworks. Furthermore, WeChat-YATT has been deployed to train models supporting WeChat product features for a large-scale user base, underscoring its effectiveness and robustness in real-world applications. We have open-sourced WeChat-YATT at https://www.github.com/tencent/WeChat-YATT.
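To make the parallel-controller idea concrete, the following is a minimal, self-contained Python sketch, not WeChat-YATT's actual API; all names (`controller`, `generate`, `score`, `Sample`) are hypothetical stand-ins. Each controller independently drives the generate-score-update loop for its own shard of prompts, so no single centralized process has to coordinate every stage for all data.

```python
# Toy illustration of a parallel-controller programming model (names hypothetical):
# instead of one central controller driving every RLHF stage for all data,
# each controller independently orchestrates generate -> score -> update
# over its own shard of prompts, avoiding a single-process bottleneck.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    response: str = ""
    reward: float = 0.0

def generate(sample: Sample) -> Sample:        # stand-in for actor rollout
    sample.response = f"response to {sample.prompt}"
    return sample

def score(sample: Sample) -> Sample:           # stand-in for a reward model
    sample.reward = float(len(sample.response) % 7)
    return sample

def controller(shard: list[Sample]) -> float:  # one controller per data shard
    scored = [score(generate(s)) for s in shard]
    # stand-in for a policy update computed from this shard's experiences
    return sum(s.reward for s in scored) / len(scored)

prompts = [Sample(f"prompt-{i}") for i in range(32)]
shards = [prompts[i::4] for i in range(4)]     # 4 controllers run in parallel
with ThreadPoolExecutor(max_workers=4) as pool:
    mean_rewards = list(pool.map(controller, shards))
print(mean_rewards)
```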
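The dynamic placement schema can likewise be illustrated with a toy heuristic. The sketch below is an illustrative assumption, not the scheduler described in the paper: given measured per-GPU throughput of the rollout and training stages, it splits a fixed GPU pool so the two stages finish at roughly the same time, shrinking the idle window between pipeline stages.

```python
# Toy dynamic placement policy (illustrative assumption, not WeChat-YATT's
# actual scheduler). With the same number of samples flowing through both
# stages, stage time ~ 1 / (per_gpu_rate * gpus); balancing the two stage
# times minimizes the time one stage spends waiting on the other.

def place(total_gpus: int, rollout_rate: float, train_rate: float) -> tuple[int, int]:
    """Return (rollout_gpus, train_gpus) that balance stage completion times.

    rollout_rate / train_rate: samples processed per second per GPU.
    Equalizing 1/(rollout_rate*r) and 1/(train_rate*t) implies
    r / t = train_rate / rollout_rate.
    """
    best, best_gap = (1, total_gpus - 1), float("inf")
    for r in range(1, total_gpus):
        t = total_gpus - r
        gap = abs(1.0 / (rollout_rate * r) - 1.0 / (train_rate * t))
        if gap < best_gap:
            best, best_gap = (r, t), gap
    return best

# e.g. rollout is 3x slower per sample than training -> it gets ~3/4 of GPUs
print(place(16, rollout_rate=1.0, train_rate=3.0))  # -> (12, 4)
```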