Mixture of Experts (MoE) models are emerging as the latest paradigm for Large Language Models (LLMs). However, due to memory constraints, MoE models with billions or even trillions of parameters can only be deployed on multi-GPU, or even multi-node multi-GPU, serving systems. Communication has therefore become a major bottleneck in distributed serving systems, especially inter-node communication. Contemporary distributed MoE models are primarily implemented using all-reduce (AR) based tensor parallelism (TP) and all-to-all (A2A) based expert parallelism (EP). However, TP generally exhibits low inter-node efficiency and is thus confined to high-speed intra-node bandwidth, whereas EP tends to suffer from load imbalance, especially at high parallel degrees. In this work, we introduce MixServe, an automatic distributed serving system that efficiently deploys MoE models via a novel TP-EP hybrid parallelism built on a fused AR-A2A communication algorithm. MixServe first estimates the communication overhead of candidate parallel strategies, taking into account the model hyperparameters and the network and hardware configurations, and automatically selects the most efficient one. We then propose the fused AR-A2A communication algorithm underlying the TP-EP hybrid parallelism, which overlaps intra-node AR communication with inter-node A2A communication. Extensive experiments on DeepSeek-R1 and Qwen3 models demonstrate that MixServe achieves superior inference performance, with 1.08~3.80x speedup in time to first token (TTFT), 1.03~1.66x speedup in inter-token latency (ITL), and 5.2%~50.3% throughput improvement over existing approaches.
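To make the strategy-selection idea concrete, the sketch below estimates per-layer communication time for AR-based TP versus A2A-based EP and picks the cheaper one. This is a minimal first-order illustration, not MixServe's actual cost model: the function names, the ring all-reduce volume formula, and the assumption that TP traffic stays on intra-node links while EP traffic may cross the inter-node fabric are all ours, chosen only to show the shape of such a comparison.

```python
# Illustrative sketch (NOT MixServe's actual cost model): compare a
# first-order communication-time estimate for AR-based TP against
# A2A-based EP and select the cheaper strategy for a given workload.

def allreduce_bytes(tokens, hidden, dtype_bytes=2, gpus=8):
    # A ring all-reduce moves roughly 2*(g-1)/g of the activation
    # tensor per GPU (reduce-scatter + all-gather phases).
    return 2 * (gpus - 1) / gpus * tokens * hidden * dtype_bytes

def alltoall_bytes(tokens, hidden, topk, dtype_bytes=2, gpus=8):
    # Expert dispatch + combine: each token is routed to top-k experts,
    # and on average a (g-1)/g fraction of that traffic leaves the GPU.
    return 2 * topk * (gpus - 1) / gpus * tokens * hidden * dtype_bytes

def pick_strategy(tokens, hidden, topk, intra_bw, inter_bw, gpus=8):
    # Assumed topology: TP's all-reduce runs over fast intra-node links,
    # while EP's all-to-all may traverse the slower inter-node fabric.
    t_tp = allreduce_bytes(tokens, hidden, gpus=gpus) / intra_bw
    t_ep = alltoall_bytes(tokens, hidden, topk, gpus=gpus) / inter_bw
    return ("TP", t_tp) if t_tp <= t_ep else ("EP", t_ep)
```

Under these assumptions, a high top-k and a slow inter-node link push the estimate toward TP, while a fast inter-node fabric or low top-k favors EP; the hybrid parallelism in the paper then goes further by overlapping the two kinds of traffic rather than choosing only one.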