The increasing scale of large language models (LLMs) necessitates highly efficient collective communication frameworks, particularly as training workloads extend to hundreds of thousands of GPUs. Traditional communication methods face significant throughput and latency limitations at this scale, hindering both the development and deployment of state-of-the-art models. This paper presents NCCLX, a collective communication framework developed at Meta and engineered to optimize performance across the full LLM lifecycle, from the synchronous demands of large-scale training to the low-latency requirements of inference. The framework is designed to support complex workloads on clusters exceeding 100,000 GPUs, ensuring reliable, high-throughput, and low-latency data exchange. Empirical evaluation on the Llama4 model demonstrates substantial improvements in communication efficiency. This research contributes a robust solution for enabling the next generation of LLMs to operate at unprecedented scales.