Quantization is a key method for reducing the GPU memory requirement of training large language models (LLMs). However, existing approaches remain ineffective for 4-bit activations and 8-bit gradients, where naive quantization causes slow convergence or accuracy loss. To address this, we introduce AGoQ, which incorporates two new techniques: 1) a layer-aware activation quantization algorithm that allocates appropriate bit-widths to the activations of different layers based on their types and pipeline stages, achieving near 4-bit activation storage, and 2) a gradient quantization algorithm that reduces memory usage and shortens communication time through 8-bit gradient storage and precision-preserving 8-bit All-Reduce communication. We conduct extensive experiments with LLMs of different sizes on two GPU clusters (up to 64 GPUs). The results show that AGoQ reduces memory usage by up to 52\% and improves training speed by up to 1.34$\times$ over the state-of-the-art training systems Megatron-LM (with or without ZeRO), COAT, and DeepSpeed on 8B to 32B LLaMA models, while achieving comparable convergence loss on pretraining and comparable accuracy on downstream tasks.
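To make the setting concrete, the sketch below shows the basic per-group symmetric quantize/dequantize primitive that 4-bit activation and 8-bit gradient storage rest on. It is a minimal NumPy illustration under assumed choices (group size 128, symmetric scaling, simulated integer storage), not AGoQ's actual algorithm; the layer-aware bit-width allocation and precision-preserving 8-bit All-Reduce are described in the body of the paper.

\begin{verbatim}
import numpy as np

def quantize_symmetric(x, bits, group_size=128):
    # Each group of `group_size` consecutive values shares one fp32 scale;
    # values are rounded to signed integers in [-2^(bits-1), 2^(bits-1)-1].
    qmax = 2 ** (bits - 1) - 1
    flat = x.reshape(-1, group_size)
    scale = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero groups
    q = np.clip(np.round(flat / scale), -qmax - 1, qmax).astype(np.int8)
    # 4-bit values are held in int8 here; real systems pack two per byte.
    return q, scale

def dequantize(q, scale, shape):
    return (q.astype(np.float32) * scale).reshape(shape)

acts = np.random.randn(4, 1024).astype(np.float32)
q4, s4 = quantize_symmetric(acts, bits=4)   # near 4-bit activation storage
grads = np.random.randn(4, 1024).astype(np.float32)
q8, s8 = quantize_symmetric(grads, bits=8)  # 8-bit gradient storage
print("4-bit max abs error:", np.abs(dequantize(q4, s4, acts.shape) - acts).max())
print("8-bit max abs error:", np.abs(dequantize(q8, s8, grads.shape) - grads).max())
\end{verbatim}

Finer groups tighten the scale around each group's dynamic range, which is why group-wise schemes tolerate low bit-widths better than per-tensor scaling; the trade-off is extra scale storage per group.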