Distributed learning is commonly used for training deep learning models, especially large ones. In distributed learning, manual parallelism (MP) methods demand considerable human effort and offer limited flexibility. Hence, automatic parallelism (AP) methods have recently been proposed to automate the parallel-strategy optimization process. Existing AP methods suffer from suboptimal solutions because they do not jointly optimize the two categories of parallel strategies (i.e., inter-layer parallelism and intra-layer parallelism). In this paper, we propose a novel AP method called UniAP, which unifies inter- and intra-layer automatic parallelism by mixed-integer quadratic programming (MIQP). To the best of our knowledge, UniAP is the first AP method that can jointly optimize the two categories of parallel strategies to find an optimal solution. Experimental results show that UniAP outperforms state-of-the-art methods by up to 1.71$\times$ in throughput and reduces strategy optimization time by up to 107$\times$ across five Transformer-based models.
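To make the MIQP framing concrete, the following schematic formulation sketches what joint strategy selection can look like; it is an illustration in our own notation, not UniAP's actual objective or constraint set. Let $x_{l,s}\in\{0,1\}$ indicate that layer $l$ of an $L$-layer model uses candidate parallel strategy $s$, let $c_{l,s}$ denote a (hypothetical) compute cost for that choice, and let $r_{l,s,t}$ denote a (hypothetical) communication or resharding cost incurred when adjacent layers $l$ and $l+1$ use strategies $s$ and $t$:
\[
\min_{x}\;\; \sum_{l=1}^{L}\sum_{s} c_{l,s}\, x_{l,s}
\;+\; \sum_{l=1}^{L-1}\sum_{s,t} r_{l,s,t}\, x_{l,s}\, x_{l+1,t}
\qquad\text{s.t.}\quad \sum_{s} x_{l,s} = 1 \;\;\forall l,\quad x_{l,s}\in\{0,1\}.
\]
The bilinear terms $x_{l,s}\,x_{l+1,t}$ are what make such a program quadratic rather than linear: the cost of a strategy at one layer depends on the strategy chosen at its neighbor, which is why greedy per-layer optimization can be suboptimal. Off-the-shelf MIQP solvers handle these terms directly, or they can be linearized with auxiliary binary variables.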