We study a novel and important communication pattern in large-scale model-parallel deep learning (DL), which we call cross-mesh resharding. This pattern emerges when the two paradigms of model parallelism, intra-operator and inter-operator parallelism, are combined to support large models on large clusters. In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh, on which the tensor may be distributed with the same or a different layout. We formalize this as a many-to-many multicast communication problem, and show that existing approaches are either sub-optimal or fail to generalize across the network topologies and tensor layouts that arise from different model architectures and parallelism strategies. We then propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule. On microbenchmarks, our overall system outperforms existing ones by up to 10x across various tensor and mesh layouts. On end-to-end training of two large models, GPT-3 and U-Transformer, we improve throughput by 10% and 50%, respectively.
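To make the pattern concrete, here is a minimal sketch of cross-mesh resharding, using NumPy to simulate device shards; all function names are illustrative assumptions, not the paper's API. A tensor sharded row-wise across a 2-device source mesh is resharded column-wise onto a 2-device destination mesh; because each destination shard needs a slice from every source shard, the communication is inherently many-to-many.

```python
import numpy as np

def shard_rows(x, n):
    """Split x into n row shards (the source mesh's layout)."""
    return np.split(x, n, axis=0)

def shard_cols(x, n):
    """Split x into n column shards (the destination mesh's layout)."""
    return np.split(x, n, axis=1)

def cross_mesh_reshard(src_shards, n_dst):
    """Recompose destination column shards from source row shards.
    Every destination device receives one slice from every source
    device, which is what makes this a many-to-many multicast."""
    dst_shards = []
    for j in range(n_dst):
        # one slice per source device, stacked back along the row axis
        pieces = [shard_cols(s, n_dst)[j] for s in src_shards]
        dst_shards.append(np.concatenate(pieces, axis=0))
    return dst_shards

x = np.arange(16).reshape(4, 4)
src = shard_rows(x, 2)            # source mesh: 2 devices, row-sharded
dst = cross_mesh_reshard(src, 2)  # destination mesh: 2 devices, column-sharded
```

Reassembling the destination shards along the column axis recovers the original tensor, confirming the resharding preserves the data while changing its layout.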