Training extremely large language models with billions of parameters is a computationally intensive task that pushes the limits of current data-parallel training systems. While techniques like ZeRO++ enable efficient distributed training of such giant models on inexpensive, low-bandwidth clusters, they can suffer from convergence issues due to potential race conditions in the hierarchical partitioning (hpZ) scheme used to reduce cross-machine communication. In this work, we first show how these race conditions cause instability when training models with billions of parameters. We then propose a modification to the partitioning algorithm that addresses these convergence challenges while maintaining competitive training efficiency. Empirical evaluation on training multi-billion-parameter Falcon and Llama-2 models demonstrates that the updated algorithm achieves reliable convergence on these massive models, where stock ZeRO++ hpZ fails to converge. The updated algorithm enables robust training of larger models while preserving 98\% of the throughput and model-training-speed improvement, without sacrificing convergence quality.