AllReduce is a collective-communication operation in distributed computing that underpins many critical deep-learning workloads. Existing AllReduce scheduling methods often lack flexibility because they are topology-specific or rely on extensive handcrafted designs that require domain expertise. In this work, we aim to alleviate this inflexibility by proposing a deep-reinforcement-learning (DRL) based pipeline that generates AllReduce schedules for various network topologies without topology-specific design features. The pipeline's flow scheduling module consists of two hierarchically structured DRL policies that work cooperatively to find optimal schedules. We compare the performance of our method against baseline methods on three topologies: BCube, DCell, and Jellyfish. Finally, we contribute a Python-based simulation environment that simulates AllReduce scheduling on these network topologies.