Expert parallelism has been introduced as a strategy to distribute the computational workload of sparsely-gated mixture-of-experts (MoE) models across multiple computing devices, facilitating the execution of these increasingly large-scale models. However, the All-to-All communication intrinsic to expert parallelism constitutes a significant overhead, diminishing the MoE models' efficiency. Current optimization approaches offer some relief, yet they are constrained by the sequential interdependence of communication and computation operations. To address this limitation, we present a novel shortcut-connected MoE (ScMoE) architecture with an overlapping parallel strategy, which effectively decouples communication from its conventional sequence, allowing for a substantial overlap of 70% to 100% with computation. When compared with the prevalent top-2 MoE architecture, ScMoE demonstrates training speed improvements of 30% and 11%, and inference improvements of 40% and 15%, in our distributed environments with PCIe and NVLink hardware, respectively, where communication constitutes 60% and 15% of the total MoE time consumption. Building on the ScMoE architecture, we further implement an expert offloading strategy to facilitate memory-limited inference, optimizing latency through the overlap of expert migration. Additionally, extensive experiments and theoretical analyses indicate that ScMoE achieves model quality comparable to, and in some instances surpassing, that of existing approaches.
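To make the overlap principle described above concrete, the following is a minimal sketch (not the authors' implementation) of decoupling the All-to-All dispatch from the sequential compute path, assuming a PyTorch/NCCL expert-parallel setting; the function names `overlapped_moe_step`, `other_compute`, and `expert_fn`, as well as the tensor shapes, are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch: overlap All-to-All dispatch with independent computation.
# Assumes torch.distributed is initialized with an NCCL backend and that
# `shortcut_tokens` has already been permuted into per-rank send order.
import torch
import torch.distributed as dist

def overlapped_moe_step(shortcut_tokens, other_compute, expert_fn):
    """shortcut_tokens: tokens routed to remote experts (send buffer).
    other_compute: callable whose work does not depend on the dispatched tokens.
    expert_fn: callable applying the local experts to the received tokens."""
    recv_buf = torch.empty_like(shortcut_tokens)
    # 1. Launch the All-to-All dispatch asynchronously (returns a wait handle).
    handle = dist.all_to_all_single(recv_buf, shortcut_tokens, async_op=True)
    # 2. Overlap: run computation that is independent of the dispatched tokens
    #    while the communication proceeds in the background.
    overlap_out = other_compute()
    # 3. Block only when the received tokens are actually needed.
    handle.wait()
    # 4. Apply local experts to the received tokens (the combine All-to-All
    #    on the return path is omitted for brevity).
    expert_out = expert_fn(recv_buf)
    return overlap_out, expert_out
```

In this sketch the degree of overlap is bounded by how much independent computation `other_compute` provides relative to the communication time, which is consistent with the 70% to 100% overlap range reported in the abstract.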