Fully Sharded Data Parallel (FSDP), also known as ZeRO, is widely used for training large-scale models, valued for its flexibility and minimal intrusion into model code. However, current FSDP systems struggle with structure-aware training methods (e.g., block-wise quantized training) and with the non-element-wise optimizers (e.g., Shampoo and Muon) used in cutting-edge models (e.g., Gemini, Kimi K2): FSDP's fixed element- or row-wise sharding formats conflict with these block-structured computations. In addition, today's implementations fall short in communication and memory efficiency, limiting scaling to tens of thousands of GPUs. We introduce veScale-FSDP, a redesigned FSDP system that couples a flexible sharding format, RaggedShard, with a structure-aware planning algorithm to deliver both flexibility and performance at scale. veScale-FSDP natively supports the efficient data placement FSDP requires, enabling block-wise quantization and non-element-wise optimizers. As a result, veScale-FSDP achieves 5-66% higher throughput and 16-30% lower memory usage than existing FSDP systems, while scaling efficiently to tens of thousands of GPUs.
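To make the sharding conflict concrete, the sketch below contrasts fixed element-wise sharding with a block-aligned, RaggedShard-style partition. This is a minimal illustration under assumed parameters (a quantization block size of 128, a toy tensor size, and the helper names are all hypothetical), not the paper's actual RaggedShard implementation: even cuts over a flattened tensor can split quantization blocks across ranks, whereas rounding each cut to a block boundary keeps every block local at the cost of slightly uneven ("ragged") shard sizes.

```python
BLOCK = 128          # block-wise quantization granularity (assumed, illustrative)
NUMEL = 1_000_000    # element count of one flattened weight tensor (toy value)
WORLD = 7            # number of FSDP ranks

def flat_cuts(numel, world):
    """Fixed element-wise sharding: even cuts over the flattened tensor,
    as in typical FSDP implementations (last shard may be short/padded)."""
    per = -(-numel // world)  # ceiling division
    return [min(r * per, numel) for r in range(world + 1)]

def block_aligned_cuts(numel, world, block):
    """Hypothetical RaggedShard-style partition: round each cut to the
    nearest block boundary so no quantization block straddles two ranks.
    Shard sizes become slightly uneven -- hence 'ragged'."""
    per = numel / world
    return [round(r * per / block) * block for r in range(world)] + [numel]

def straddling_blocks(cuts, block):
    """Count blocks split across a shard boundary; such blocks would need
    cross-rank communication just to compute their quantization scale."""
    return sum(1 for c in cuts[1:-1] if c % block != 0)

print("flat sharding, straddling blocks:",
      straddling_blocks(flat_cuts(NUMEL, WORLD), BLOCK))          # 6
print("block-aligned sharding, straddling blocks:",
      straddling_blocks(block_aligned_cuts(NUMEL, WORLD, BLOCK), BLOCK))  # 0
```

The same intuition applies to non-element-wise optimizers such as Shampoo and Muon, whose per-matrix or per-block statistics are cheapest to compute when each structural unit resides on a single rank.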