Pretraining large language models (LLMs) typically requires centralized clusters with thousands of high-memory GPUs (e.g., H100/A100). Recent decentralized training methods reduce communication overhead through federated optimization, but they still train the entire model on each node and thus remain constrained by GPU memory. In this work, we propose SParse Expert Synchronization (SPES), a memory-efficient decentralized framework for pretraining mixture-of-experts (MoE) LLMs. SPES trains only a subset of experts per node, substantially lowering the memory footprint. Each node updates its local experts and periodically synchronizes them with other nodes, eliminating full-parameter transmission while still sharing knowledge efficiently. To accelerate convergence, we introduce an expert-merging warm-up strategy in which experts exchange knowledge early in training to rapidly establish foundational capabilities. With SPES, we train a 2B-parameter MoE LLM on 16 standalone 48GB GPUs connected over the internet, achieving performance competitive with centrally trained LLMs under similar computational budgets. We further demonstrate scalability by training a 7B model from scratch and a 9B model upcycled from a dense checkpoint, both of which match prior centralized baselines. Our code is available at https://github.com/zjr2000/SPES.
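To make the synchronization scheme concrete, below is a minimal sketch of the idea described above: each node holds only a subset of experts, runs local updates, and periodically merges shared experts with other nodes by averaging, so no node ever transmits the full parameter set. The node-to-expert assignment, the simulated update, and all names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, DIM = 8, 4

# Assumed assignment: each node hosts a sparse, overlapping subset of
# expert IDs, so knowledge can propagate between nodes at sync time.
node_experts = {
    0: [0, 1, 2, 3],
    1: [2, 3, 4, 5],
    2: [4, 5, 6, 7],
}

# Each node initializes local copies of only its own experts
# (this is the source of the memory savings: no node stores all 8).
params = {
    n: {e: rng.normal(size=DIM) for e in experts}
    for n, experts in node_experts.items()
}

def local_step(node, lr=0.1):
    """Simulated local training step: perturb each local expert
    (a stand-in for a real gradient update)."""
    for e in params[node]:
        params[node][e] -= lr * rng.normal(size=DIM)

def synchronize():
    """Federated-style merge: average each expert across the nodes
    that hold it; only expert-sized tensors are exchanged."""
    for e in range(NUM_EXPERTS):
        holders = [n for n, exps in node_experts.items() if e in exps]
        merged = np.mean([params[n][e] for n in holders], axis=0)
        for n in holders:
            params[n][e] = merged.copy()

# Training loop: several rounds of local updates followed by a sync.
for _ in range(3):
    for n in node_experts:
        local_step(n)
    synchronize()

# After a sync round, experts shared by two nodes agree exactly.
assert np.allclose(params[0][2], params[1][2])
```

In a real deployment the averaging step would run over internet connections (e.g., with collective or gossip communication), and the warm-up phase described in the abstract would additionally merge expert knowledge early in training; this sketch only shows the steady-state sparse synchronization.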