Multi-hop all-reduce is the de facto backbone of large model training. As training scale increases, the network often becomes a bottleneck, motivating a reduction in the volume of transmitted data. Accordingly, recent systems have demonstrated significant acceleration of training using gradient quantization. However, these systems are not optimized for multi-hop aggregation, where entries are partially summed multiple times along the aggregation topology. This paper presents DynamiQ, a quantization framework that bridges the gap between quantization best practices and multi-hop aggregation. DynamiQ introduces novel techniques to better represent partial sums, co-designed with a fused decompress-accumulate-recompress kernel for fast execution. We extend PyTorch DDP to support DynamiQ over NCCL P2P, and across different LLMs, tasks, and scales, we demonstrate consistent speedups of up to 34.2% over the best of state-of-the-art methods such as Omni-Reduce and THC, as well as emerging standards such as MXFP4, MXFP6, and MXFP8. Further, DynamiQ is the only evaluated method that consistently reaches near-baseline accuracy (e.g., 99.9% of the BF16 baseline) while significantly accelerating training.
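To illustrate the problem the abstract names, the following minimal sketch (not DynamiQ's actual algorithm; the uniform 4-bit quantizer and chain topology are illustrative assumptions) shows why naive per-hop quantization degrades multi-hop aggregation: each hop decompresses the incoming partial sum, accumulates its local gradient, and recompresses before forwarding, so quantization error compounds with every hop.

```python
# Illustrative sketch: error compounding in multi-hop quantized aggregation.
# Assumptions (not from the paper): uniform symmetric 4-bit quantization,
# a chain aggregation topology over 8 workers, standard-normal gradients.
import numpy as np

def quantize(x, bits=4):
    """Uniform symmetric quantization to 2^bits - 1 levels (illustrative)."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return np.zeros_like(x, dtype=np.int8), 1.0
    return np.round(x / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
grads = [rng.standard_normal(1024).astype(np.float32) for _ in range(8)]
exact = np.sum(grads, axis=0)

# Multi-hop chain: each hop receives a quantized partial sum, decompresses,
# accumulates its local gradient, and recompresses for the next hop.
q, s = quantize(grads[0])
for g in grads[1:]:
    partial = dequantize(q, s) + g   # decompress-accumulate
    q, s = quantize(partial)         # recompress before forwarding
multi_hop = dequantize(q, s)

# Single-shot baseline: each gradient quantized once, summed at the root.
single = np.sum([dequantize(*quantize(g)) for g in grads], axis=0)

err_multi = np.linalg.norm(multi_hop - exact) / np.linalg.norm(exact)
err_single = np.linalg.norm(single - exact) / np.linalg.norm(exact)
```

Because the partial sum is requantized at every hop, its error grows with the number of hops, whereas single-shot quantization incurs one (partially cancelling) error per input; better representations for partial sums target exactly this gap.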