The sparsely-activated Mixture-of-Experts (MoE) architecture has increasingly been adopted to further scale large language models (LLMs) due to its sub-linear scaling of computation cost. However, frequent failures still pose significant challenges as training scales. The cost of even a single failure is significant, as all GPUs sit idle until the failure is resolved, and considerable training progress may be lost because training must restart from a checkpoint. Existing solutions for efficient fault-tolerant training either lack elasticity or build resiliency into pipeline parallelism, which cannot be applied to MoE models due to the expert parallelism strategy adopted by the MoE architecture. We present Lazarus, a system for resilient and elastic training of MoE models. Lazarus adaptively allocates expert replicas to address the inherent imbalance in expert workload and speeds up training, while a provably optimal expert placement algorithm is developed to maximize the probability of recovery upon failures. Through adaptive expert placement and a flexible token dispatcher, Lazarus can also fully utilize all available nodes after failures, leaving no GPU idle. Our evaluation shows that Lazarus outperforms existing MoE training systems by up to 5.7x under frequent node failures and 3.4x on a real spot instance trace.
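To illustrate the kind of load-aware replica allocation the abstract describes, the sketch below distributes a fixed budget of GPU slots across experts in proportion to their observed token load, so hot experts receive more replicas. This is a minimal illustrative heuristic, not Lazarus's actual (provably optimal) algorithm; `allocate_replicas` and its greedy per-replica-load rule are assumptions for exposition.

```python
import heapq

def allocate_replicas(token_counts, total_slots):
    """Greedily assign replica slots so that the per-replica token load
    is balanced: each spare slot goes to the currently most-loaded expert.
    NOTE: an illustrative sketch, not the placement algorithm from the paper.
    """
    n = len(token_counts)
    assert total_slots >= n, "need at least one replica per expert"
    replicas = [1] * n
    # Max-heap (via negation) keyed on tokens per replica.
    heap = [(-count / 1, i) for i, count in enumerate(token_counts)]
    heapq.heapify(heap)
    for _ in range(total_slots - n):
        _, i = heapq.heappop(heap)      # expert with highest load per replica
        replicas[i] += 1
        heapq.heappush(heap, (-token_counts[i] / replicas[i], i))
    return replicas

# Example: 3 experts with skewed load, 6 GPU slots in total.
print(allocate_replicas([100, 300, 600], 6))  # -> [1, 2, 3]
```

The heaviest expert ends up with the most replicas, which is the intuition behind speeding up training under imbalanced expert workloads; an elastic system would additionally re-run such an allocation whenever the node set shrinks or grows after a failure.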