We present MegaScale-MoE, a production system tailored for the efficient training of large-scale mixture-of-experts (MoE) models. MoE has emerged as a promising architecture for scaling large language models (LLMs) to unprecedented sizes, thereby enhancing model performance. However, existing MoE training systems suffer degraded training efficiency, a problem exacerbated by the escalating scale of MoE models and the continuous evolution of hardware. Recognizing the pivotal role of efficient communication in MoE training, MegaScale-MoE customizes communication-efficient parallelism strategies for the attention and FFN modules in each MoE layer and adopts a holistic approach to overlapping communication with computation at both the inter- and intra-operator levels. Additionally, MegaScale-MoE compresses communication to lower precision with adjusted communication patterns, further improving training efficiency. When training a 352B MoE model on 1,440 NVIDIA Hopper GPUs, MegaScale-MoE achieves a training throughput of 1.41M tokens/s, improving efficiency by 1.88$\times$ over Megatron-LM. We share our operational experience in accelerating MoE training and hope that, by offering our insights in system design, this work will motivate future research on MoE systems.
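To give a concrete flavor of the inter-operator communication-computation overlap mentioned above, the following is a minimal, illustrative sketch and not MegaScale-MoE's actual implementation: it assumes a PyTorch distributed setup with an initialized NCCL process group, and the function and buffer names (`overlapped_dispatch_and_compute`, `expert_ffn`, `compute_tokens`, `dispatch_tokens`) are hypothetical. The idea is simply to issue the all-to-all token dispatch for one micro-batch asynchronously so it proceeds while the expert FFN of another micro-batch computes.

```python
# Illustrative sketch only (not the paper's code): overlap an asynchronous
# all-to-all token dispatch with expert FFN computation on another buffer.
import torch
import torch.distributed as dist

def overlapped_dispatch_and_compute(expert_ffn, compute_tokens, dispatch_tokens):
    """Overlap the all-to-all dispatch of `dispatch_tokens` with the expert
    FFN computation over `compute_tokens` (hypothetical buffers)."""
    recv_buf = torch.empty_like(dispatch_tokens)

    # Issue the token dispatch asynchronously; with the NCCL backend this
    # communication runs concurrently with kernels on the compute stream.
    work = dist.all_to_all_single(recv_buf, dispatch_tokens, async_op=True)

    # Expert computation overlaps with the in-flight all-to-all.
    out = expert_ffn(compute_tokens)

    # Block until the dispatched tokens have arrived before they are used.
    work.wait()
    return out, recv_buf
```

In practice, MegaScale-MoE applies such overlap more aggressively and at finer granularity (including within operators), but this sketch captures the basic pattern of hiding communication latency behind computation.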