The increasing scale of large language models (LLMs) necessitates highly efficient collective communication frameworks, particularly as training workloads extend to hundreds of thousands of GPUs. Traditional communication methods face significant throughput and latency limitations at this scale, hindering both the development and deployment of state-of-the-art models. This paper presents NCCLX, a collective communication framework developed at Meta and engineered to optimize performance across the full LLM lifecycle, from the synchronous demands of large-scale training to the low-latency requirements of inference. The framework is designed to support complex workloads on clusters exceeding 100,000 GPUs, ensuring reliable, high-throughput, and low-latency data exchange. Empirical evaluation on the Llama4 model demonstrates substantial improvements in communication efficiency. This research contributes a robust solution for enabling the next generation of LLMs to operate at unprecedented scales.