Large-scale training systems typically use synchronous training, requiring all GPUs to be healthy simultaneously. In our experience training on O(100K) GPUs, synchronous training results in low efficiency due to frequent failures and long recovery times. To address this problem, we propose a novel training paradigm, Fault Tolerant Hybrid-Shared Data Parallelism (FT-HSDP). FT-HSDP uses data parallel replicas as units of fault tolerance. When failures occur, only the single data-parallel replica containing the failed GPU or server is taken offline and restarted, while the other replicas continue training. To realize this idea at scale, FT-HSDP incorporates several techniques: 1) We introduce a Fault Tolerant All Reduce (FTAR) protocol for gradient exchange across data parallel replicas. FTAR relies on the CPU to drive the complex control logic for tasks like adding or removing participants dynamically, and relies on the GPU to perform data transfer for best performance. 2) We introduce a non-blocking catch-up protocol, allowing a recovering replica to join training with minimal stall. Compared with fully synchronous training at O(100K) GPUs, FT-HSDP reduces the stall time due to failure recovery from 10 minutes to 3 minutes, increasing effective training time from 44\% to 80\%. We further demonstrate that FT-HSDP's asynchronous recovery does not cause any meaningful degradation in the accuracy of the resulting model.
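The core FT-HSDP idea, replicas as fault-tolerance units with a dynamic all-reduce membership, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual implementation: class and method names (`FTARGroup`, `all_reduce`) are invented for illustration, the "control plane" is plain Python standing in for CPU-side logic, and the "data plane" is a list average standing in for GPU gradient transfer.

```python
# Hypothetical sketch of the FTAR idea: a CPU-side control loop maintains
# the set of healthy data-parallel replicas, and gradients are averaged
# only across that set, so surviving replicas never wait on a failed one.

class FTARGroup:
    """Tracks healthy replicas. The control plane (adding/removing
    participants) would run on the CPU; the data plane (the actual
    gradient transfer) would run on GPUs in a real system."""

    def __init__(self, replica_ids):
        self.healthy = set(replica_ids)

    def remove(self, replica_id):
        # Control logic: a failed replica is taken offline; the rest
        # of the group continues training without it.
        self.healthy.discard(replica_id)

    def add(self, replica_id):
        # Control logic: a recovered replica rejoins after catching up.
        self.healthy.add(replica_id)

    def all_reduce(self, grads_by_replica):
        # Data plane: average gradients over the current healthy set only.
        parts = [grads_by_replica[r] for r in sorted(self.healthy)]
        n = len(parts)
        return [sum(vals) / n for vals in zip(*parts)]


group = FTARGroup([0, 1, 2, 3])
grads = {0: [4.0], 1: [8.0], 2: [12.0], 3: [16.0]}
print(group.all_reduce(grads))  # averages over all 4 replicas -> [10.0]
group.remove(2)                 # replica 2 fails and is taken offline
print(group.all_reduce(grads))  # averages over the 3 survivors only
```

The key property the sketch captures is that membership changes are a control-plane decision made outside the reduction itself, so the averaging step needs no global barrier over failed participants.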