Expert parallelism has been introduced as a strategy to distribute the computational workload of sparsely-gated mixture-of-experts (MoE) models across multiple computing devices, facilitating the execution of these increasingly large-scale models. However, the All-to-All communication intrinsic to expert parallelism constitutes a significant overhead, diminishing the efficiency of MoE models. Current optimization approaches offer some relief, yet they are constrained by the sequential interdependence of communication and computation operations. To address this limitation, we present a novel shortcut-connected MoE architecture with an overlapping parallel strategy, designated ScMoE, which decouples communication from its conventional sequence and allows 70% to 100% of it to be overlapped with computation. Compared with the prevalent top-2 MoE architecture, ScMoE achieves training speedups of 30% and 11% and inference speedups of 40% and 15% in our PCIe and NVLink hardware environments, respectively, where communication accounts for 60% and 15% of the total MoE time consumption. Moreover, extensive experiments and theoretical analyses indicate that ScMoE not only matches the model quality of existing approaches on vision and language tasks but in some instances surpasses it.
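To make the overlap idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a dispatch All-to-All issued for shortcut tokens can be hidden behind dense computation, using PyTorch's asynchronous collectives. It assumes `torch.distributed` is initialized, that tokens are pre-partitioned into equal splits per rank, and it omits routing and token permutation for brevity; `dense_mlp` and `local_expert` are placeholder modules.

```python
import torch
import torch.distributed as dist

def overlapped_moe_step(x_shortcut, x_current, dense_mlp, local_expert):
    """Simplified sketch: overlap expert-parallel All-to-All with computation.

    x_shortcut: tokens taken from an earlier point of the block (the shortcut),
                to be dispatched to remote experts.
    x_current:  tokens processed locally by the dense MLP path.
    """
    # Issue the dispatch All-to-All asynchronously so it proceeds in the background.
    recv = torch.empty_like(x_shortcut)
    work = dist.all_to_all_single(recv, x_shortcut, async_op=True)

    # Dense computation runs while the shortcut tokens are in flight,
    # hiding (part of) the communication latency behind useful work.
    dense_out = dense_mlp(x_current)

    work.wait()                        # dispatch complete (ideally already overlapped)
    expert_out = local_expert(recv)    # expert computation on the received tokens

    # Combine All-to-All returns expert outputs to their source ranks.
    combined = torch.empty_like(expert_out)
    dist.all_to_all_single(combined, expert_out)

    # Shortcut connection merges the expert and dense outputs.
    return dense_out + combined
```

In this sketch the overlap comes purely from issuing the collective with `async_op=True` before the dense forward pass; in practice the achievable overlap depends on how much dense computation is available between the dispatch and the point where its result is needed.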