In distributed deep learning, communication remains a critical bottleneck. While modern hardware advances rapidly, over 60 percent of production HPC systems still rely on legacy infrastructure (V100 GPUs, multi-plane Ethernet/InfiniBand), necessitating communication optimization without hardware upgrades. Existing approaches face three key limitations: (1) static single-rail binding underutilizes multi-rail bandwidth, (2) protocol heterogeneity (TCP-RDMA coexistence) causes synchronization delays, and (3) mainstream libraries (NCCL/MPI) lack cross-protocol coordination. We present Nezha, the first protocol-agnostic communication system for multi-rail networks. Our contributions include: (1) Hardware-agnostic cross-protocol coordination: a unified abstraction enabling seamless collaboration among in-network computing (SHARP), adaptive RDMA (GLEX), and TCP, reducing latency by 1.7x to 4.3x compared with Gloo. (2) Protocol-aware dynamic load balancing: a hybrid scheduling strategy with a cold/hot-start state machine for heterogeneous protocols, reducing startup latency for small payloads while increasing throughput for large transfers. (3) Fault-tolerant multi-rail collaboration: a self-recovery mechanism that reroutes data flows within 200 milliseconds of a single-rail failure, ensuring uninterrupted training. Experiments on 8-node clusters demonstrate that Nezha achieves 74 percent and 80 percent higher throughput than MPTCP in homogeneous (TCP-TCP) and heterogeneous (TCP-SHARP) networks, respectively. On 128-node supercomputers, Nezha delivers 2.36x higher training efficiency than Gloo. By bridging modern DNN communication demands with legacy infrastructure, Nezha demonstrates that systematic multi-rail optimization can unlock the potential of aging clusters.
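To make the scheduling and fault-tolerance ideas above concrete, the following is a minimal sketch of a cold/hot-start multi-rail scheduler with failover rerouting. All names, thresholds, and bandwidth figures here are illustrative assumptions for exposition, not Nezha's actual implementation or API: small payloads are sent over a single fast rail to avoid per-rail startup cost (the "cold start" path), large payloads are striped across all healthy rails in proportion to bandwidth (the "hot" path), and a failed rail is simply excluded from subsequent plans, which models rerouting around a single-rail failure.

```python
class Rail:
    """One network rail (e.g., a TCP or RDMA plane) with an assumed bandwidth."""

    def __init__(self, name, bandwidth_gbps):
        self.name = name
        self.bandwidth_gbps = bandwidth_gbps
        self.healthy = True


class MultiRailScheduler:
    """Illustrative cold/hot-start scheduler (not Nezha's real design).

    Payloads at or below COLD_THRESHOLD take the cold-start path: one warm
    rail, no striping overhead. Larger payloads are striped across all
    healthy rails proportionally to their bandwidth.
    """

    COLD_THRESHOLD = 64 * 1024  # bytes; assumed cutoff, chosen for illustration

    def __init__(self, rails):
        self.rails = rails

    def plan(self, payload_bytes):
        """Return a list of (rail_name, chunk_bytes) covering the payload."""
        healthy = [r for r in self.rails if r.healthy]
        if not healthy:
            raise RuntimeError("no healthy rails available")
        if payload_bytes <= self.COLD_THRESHOLD:
            # Cold start: send everything over the single fastest rail.
            best = max(healthy, key=lambda r: r.bandwidth_gbps)
            return [(best.name, payload_bytes)]
        # Hot path: stripe chunks proportionally to per-rail bandwidth.
        total_bw = sum(r.bandwidth_gbps for r in healthy)
        plan, assigned = [], 0
        for r in healthy[:-1]:
            share = int(payload_bytes * r.bandwidth_gbps / total_bw)
            plan.append((r.name, share))
            assigned += share
        # Last rail absorbs rounding leftovers so chunks sum to the payload.
        plan.append((healthy[-1].name, payload_bytes - assigned))
        return plan

    def fail_rail(self, name):
        """Mark a rail as down; later plans reroute around it automatically."""
        for r in self.rails:
            if r.name == name:
                r.healthy = False


sched = MultiRailScheduler([Rail("tcp0", 10), Rail("rdma0", 100)])
small = sched.plan(4 * 1024)          # single fastest rail
large = sched.plan(10 * 1024 * 1024)  # striped across both rails
sched.fail_rail("rdma0")
rerouted = sched.plan(10 * 1024 * 1024)  # surviving rail carries everything
```

In this toy model, "rerouting within 200 milliseconds" reduces to the failure detector flipping the `healthy` flag quickly; the real system would additionally have to migrate in-flight transfers, which this sketch does not attempt.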