Sparsely-activated Mixture-of-Experts (MoE) layers have found practical applications in enlarging the model size of large-scale foundation models, with only a sub-linear increase in computational cost. Despite the wide adoption of hybrid parallel paradigms like model parallelism, expert parallelism, and expert-sharding parallelism (i.e., MP+EP+ESP) to support MoE model training on GPU clusters, training efficiency is hindered by the communication costs these paradigms introduce. To address this limitation, we propose Parm, a system that accelerates MP+EP+ESP training with two dedicated schedules for placing communication tasks. The proposed schedules eliminate redundant computations and communications and enable overlap between intra-node and inter-node communications, ultimately reducing the overall training time. As the two schedules are not mutually exclusive, we provide comprehensive theoretical analyses and derive an automatic and accurate solution to determine which schedule should be applied in different scenarios. Experimental results on an 8-GPU server and a 32-GPU cluster demonstrate that Parm outperforms the state-of-the-art MoE training system, DeepSpeed-MoE, achieving 1.13$\times$ to 5.77$\times$ speedup on 1296 manually configured MoE layers and approximately 3$\times$ improvement on two real-world MoE models based on BERT and GPT-2.