In this work, we present WLB-LLM, a workload-balanced 4D parallelism framework for large language model (LLM) training. We first thoroughly analyze the workload imbalance issue in LLM training and identify its two primary sources, at the pipeline parallelism and context parallelism levels. To address the imbalance at the pipeline parallelism level, WLB-LLM incorporates a workload-aware variable-length document packing method that balances computation and communication workload across micro-batches. At the context parallelism level, WLB-LLM introduces a novel fine-grained per-document sharding strategy, ensuring that each worker within a context parallelism group receives an identical workload. Comprehensive experiments at different model scales demonstrate that WLB-LLM significantly mitigates workload imbalance during 4D-parallel LLM training and achieves an average speedup of 1.23x when applied in our internal LLM training framework.
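To make the packing idea concrete, the following is a minimal sketch, not the paper's actual algorithm: documents of varying length are assigned greedily (longest first) to micro-batches so that per-batch attention cost stays balanced. The quadratic cost model (cost proportional to length squared per document, reflecting causal attention under per-document masking) and the function name `balanced_pack` are illustrative assumptions.

```python
# Hedged sketch of workload-aware variable-length document packing.
# Assumption: attention cost of a document scales ~ length^2 under
# per-document causal masking; this is NOT the paper's exact method.
import heapq

def balanced_pack(doc_lens, num_microbatches):
    """Greedily assign document lengths to micro-batches,
    always placing the next-longest document into the
    currently cheapest batch (LPT-style scheduling)."""
    # Min-heap of (current batch cost, batch index).
    heap = [(0, i) for i in range(num_microbatches)]
    heapq.heapify(heap)
    batches = [[] for _ in range(num_microbatches)]
    for length in sorted(doc_lens, reverse=True):
        cost, i = heapq.heappop(heap)
        batches[i].append(length)
        heapq.heappush(heap, (cost + length * length, i))
    return batches

# Example: skewed document lengths packed into 2 micro-batches.
docs = [8192, 4096, 4096, 2048, 1024, 1024, 512, 512]
packed = balanced_pack(docs, 2)
```

A standard property of this longest-processing-time-first greedy is that the gap between the heaviest and lightest batch never exceeds the cost of the single largest document, which bounds the residual imbalance.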