SSD-offloaded training offers a practical and promising approach to making LLM training cost-effective. Building on gradient accumulation with micro-batches, this paper introduces GreedySnake, a new SSD-offloaded training system that employs vertical scheduling, which executes all micro-batches of a layer before proceeding to the next. Compared to existing systems that use horizontal scheduling (i.e., executing each micro-batch through all layers before starting the next), GreedySnake achieves higher training throughput with smaller batch sizes, bringing the system much closer to the ideal scenario predicted by the roofline model. To further mitigate the I/O bottleneck, GreedySnake overlaps part of the optimization step with the forward pass of the next iteration. Experimental results on A100 GPUs show that GreedySnake achieves saturated training throughput improvements over ZeRO-Infinity: 1.96x on 1 GPU and 1.93x on 4 GPUs for GPT-65B, and 2.53x on 1 GPU for GPT-175B.
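The contrast between the two schedules comes down to loop order. A minimal sketch (function names are illustrative, not GreedySnake's API): under vertical scheduling, each layer's parameters need to be fetched from SSD only once per pass, amortizing the I/O cost across all micro-batches, whereas horizontal scheduling re-fetches every layer for every micro-batch.

```python
def horizontal_schedule(n_layers, n_micro):
    """Existing systems: run each micro-batch through all layers in turn,
    so each layer is fetched from SSD once per micro-batch."""
    return [(l, m) for m in range(n_micro) for l in range(n_layers)]

def vertical_schedule(n_layers, n_micro):
    """GreedySnake: run all micro-batches of a layer before the next layer,
    so each layer is fetched from SSD only once."""
    return [(l, m) for l in range(n_layers) for m in range(n_micro)]
```

With 2 layers and 3 micro-batches, the horizontal order is (0,0), (1,0), (0,1), (1,1), (0,2), (1,2), while the vertical order is (0,0), (0,1), (0,2), (1,0), (1,1), (1,2) — layer 0 is reused across all three micro-batches before layer 1 is loaded.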