The Mixture-of-Experts (MoE) model is a widely used distributed ensemble learning method for large language models (LLMs), favored for its ability to sparsify and scale models efficiently. However, MoE performance is limited by load imbalance, the high latency of All-to-All communication, and redundant computation caused by excessive expert capacity. Load imbalance arises because existing routing policies consistently tend to select certain experts, and the frequent inter-node communication in the All-to-All procedure significantly prolongs training time. To alleviate these problems, we propose a novel routing strategy that combines load balance with locality by converting part of the inter-node communication into intra-node communication. Notably, we show that there is a minimum threshold for expert capacity, calculated from the maximal angular deviation between the experts' gating weights and the assigned tokens. We port these modifications to the PanGu-Sigma model, built on the MindSpore framework with multi-level routing, and conduct experiments on Ascend clusters. The results demonstrate that the proposed LocMoE reduces training time per epoch by 12.68% to 22.24% compared with classical routers, such as the hash router and switch router, without impacting model accuracy.
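The locality-aware routing idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the `locality_bias` parameter, and the NumPy formulation are all illustrative assumptions. It shows top-1 gating where experts residing on the token's own node receive a small score bonus, so near-tie decisions favor intra-node dispatch over inter-node All-to-All traffic.

```python
import numpy as np

def locality_aware_route(scores, node_of_expert, local_node, locality_bias=0.1):
    """Hypothetical sketch of locality-biased top-1 routing.

    scores:         gating scores for each expert, shape (num_experts,)
    node_of_expert: node id hosting each expert, shape (num_experts,)
    local_node:     node id where the token currently resides
    locality_bias:  assumed additive bonus for experts on the local node
    """
    # Boost experts co-located with the token; a near-tie then resolves
    # to the local expert, turning inter-node dispatch into intra-node.
    biased = scores + locality_bias * (node_of_expert == local_node)
    return int(np.argmax(biased))
```

With `scores = [0.50, 0.55]`, expert 0 local and expert 1 remote, plain argmax would dispatch to the remote expert 1, while the biased score selects the local expert 0. In practice the bias would need to be balanced against an auxiliary load-balancing loss so that locality does not reintroduce expert hot-spotting.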