Distributed training is the de facto standard for scaling up the training of deep learning models across multiple GPUs. Its performance bottleneck lies in the communication required for gradient synchronization. Although high tensor sparsity is widely observed, an optimal communication scheme that fully leverages this sparsity is still missing. This paper aims to bridge that gap. We first analyze the characteristics of sparse tensors in popular models to understand the fundamentals of sparsity. We then systematically explore the design space of communication schemes for sparse tensors and identify the optimal ones. These findings provide new insights and inspire us to develop Zen, a holistic gradient synchronization system for sparse tensors. We demonstrate that Zen achieves up to a $5.09\times$ speedup in communication time and up to a $2.48\times$ speedup in training throughput compared to state-of-the-art methods.