Scaling large language models (LLMs) demands extensive data and computing resources, which are traditionally constrained to data centers by the high-bandwidth requirements of distributed training. Low-bandwidth methods like federated learning (FL) could enable collaborative training of larger models across weakly-connected GPUs if they can effectively be used for pre-training. To achieve this, we introduce Photon, the first complete system for federated end-to-end LLM training, leveraging cross-silo FL for global-scale training with minimal communication overheads. Using Photon, we train the first federated family of decoder-only LLMs from scratch. We show that: (1) Photon can train model sizes up to 7B in a federated fashion while reaching a better perplexity than centralized pre-training; (2) Photon model training time decreases with available compute, achieving a similar compute-time trade-off to centralized training; and (3) Photon outperforms the wall-time of baseline distributed training methods by 35% while communicating 64x-512x less. Our proposal is robust to data heterogeneity and converges twice as fast as previous methods like DiLoCo. This surprising data efficiency stems from a unique approach combining small client batch sizes with extremely high learning rates, enabled by federated averaging's robustness to hyperparameters. Photon thus represents the first economical system for global internet-wide LLM pre-training.
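The abstract attributes Photon's hyperparameter robustness to federated averaging, the standard FL aggregation rule in which a server combines locally trained client models weighted by their data sizes. The sketch below illustrates that rule in minimal form; the function name and toy values are hypothetical and this is not Photon's actual implementation.

```python
# Illustrative sketch of federated averaging (FedAvg): the server averages
# client model parameters, weighting each client by its local sample count.
# All names and values here are hypothetical, not taken from Photon.

def fedavg(client_params, client_num_samples):
    """Return the sample-count-weighted average of client parameter lists."""
    total = sum(client_num_samples)
    averaged = [0.0] * len(client_params[0])
    for params, n in zip(client_params, client_num_samples):
        weight = n / total  # client's share of the total training data
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged

# Two clients with toy 3-parameter "models" and unequal data sizes:
clients = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
samples = [100, 300]
print(fedavg(clients, samples))  # pulled toward the larger client's model
```

In practice each client runs many local optimization steps between aggregation rounds, which is what keeps communication low relative to fully synchronous distributed training.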