Distributed training methods are crucial for large language models (LLMs). However, existing distributed training methods often suffer from communication bottlenecks, stragglers, and limited elasticity. Local SGD methods have been proposed to address these issues, but their effectiveness remains limited to small-scale training due to additional memory overhead and insufficient attention to efficiency and stability. To tackle these issues, we propose EDiT, an innovative Efficient Distributed Training method that combines a tailored Local SGD approach with model sharding techniques to enhance large-scale training efficiency. EDiT performs layer-wise parameter synchronization during the forward pass, reducing communication and memory overhead and enabling the overlap of computation and communication. In addition, EDiT employs a pseudo gradient penalty strategy to suppress loss spikes, which ensures training stability and improves performance. We further introduce A-EDiT, a fully asynchronous variant of EDiT that accommodates heterogeneous clusters. Building on EDiT/A-EDiT, we conduct a series of experiments to validate large-scale asynchronous training for LLMs, accompanied by comprehensive analyses. Experimental results demonstrate the superior performance of EDiT/A-EDiT, establishing them as robust solutions for distributed LLM training in diverse computational ecosystems.
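To make the Local SGD framing concrete, the following is a minimal single-process sketch of one Local SGD round with a pseudo-gradient update. The norm clipping here is only a stand-in for the pseudo gradient penalty described in the abstract, whose exact form is not specified in this text; the toy quadratic loss, worker counts, learning rates, and clipping threshold are all illustrative assumptions.

```python
import numpy as np

def grad(theta):
    # Gradient of a toy quadratic loss f(theta) = 0.5 * ||theta - 1||^2
    # (illustrative stand-in for a real model's gradient).
    return theta - 1.0

def local_sgd_round(theta_global, num_workers=4, inner_steps=8,
                    inner_lr=0.1, outer_lr=0.7, clip_norm=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    local_params = []
    for _ in range(num_workers):
        theta = theta_global.copy()
        for _ in range(inner_steps):
            noise = 0.01 * rng.standard_normal(theta.shape)
            theta -= inner_lr * (grad(theta) + noise)  # noisy local SGD step
        local_params.append(theta)
    # Synchronization point: average local replicas (an all-reduce in practice).
    theta_avg = np.mean(local_params, axis=0)
    # Pseudo gradient: displacement of the averaged replica from the global model.
    pseudo_grad = theta_global - theta_avg
    # Hypothetical penalty: clip the pseudo-gradient norm to damp abrupt
    # updates (assumed mechanism, not EDiT's actual strategy).
    norm = np.linalg.norm(pseudo_grad)
    if norm > clip_norm:
        pseudo_grad *= clip_norm / norm
    # Outer optimizer step (plain SGD here; momentum is common in practice).
    return theta_global - outer_lr * pseudo_grad

theta = np.zeros(4)
for _ in range(5):
    theta = local_sgd_round(theta)
print(np.round(theta, 3))  # approaches the optimum at 1.0
```

In a real sharded deployment the averaging step is a collective communication, which is where EDiT's layer-wise synchronization during the forward pass reduces overhead by overlapping it with computation.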