As machine learning models continue to scale, distributed training is necessary both to fit model weights within the memory of each device and to reduce training time. However, this comes at the expense of increased communication overhead from the exchange of gradients and activations, which becomes the critical bottleneck of the end-to-end training process. In this work, we motivate the design of multi-dimensional networks within machine learning systems as a cost-efficient mechanism to enhance overall network bandwidth. We also identify that optimal bandwidth allocation across dimensions is pivotal for multi-dimensional networks to ensure efficient resource utilization. We introduce LIBRA, a framework focused on optimizing multi-dimensional fabric architectures. Through case studies, we demonstrate the value of LIBRA, both in architecting optimized fabrics under diverse constraints and in enabling co-optimization opportunities.