The emergence of large-scale Mixture of Experts (MoE) models has marked a significant advancement in artificial intelligence, offering enhanced model capacity and computational efficiency through conditional computation. However, the deployment and inference of these models present substantial challenges in terms of computational resources, latency, and energy efficiency. This comprehensive survey systematically analyzes the current landscape of inference optimization techniques for MoE models across the entire system stack. We first establish a taxonomic framework that categorizes optimization approaches into model-level, system-level, and hardware-level optimizations. At the model level, we examine architectural innovations, including efficient expert design, attention mechanisms, and various compression techniques such as pruning, quantization, and knowledge distillation, as well as algorithmic improvements, including dynamic routing strategies and expert merging methods. At the system level, we investigate distributed computing approaches, load balancing mechanisms, and efficient scheduling algorithms that enable scalable deployment. Furthermore, we delve into hardware-specific optimizations and co-design strategies that maximize throughput and energy efficiency. This survey not only provides a structured overview of existing solutions but also identifies key challenges and promising research directions in MoE inference optimization. Our comprehensive analysis serves as a valuable resource for researchers and practitioners working on large-scale deployment of MoE models in resource-constrained environments. To facilitate ongoing updates and the sharing of cutting-edge advances in MoE inference optimization research, we have established a repository accessible at \url{https://github.com/MoE-Inf/awesome-moe-inference/}.
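To make the notion of conditional computation concrete, the following is a minimal sketch of the top-k gating commonly used in MoE layers: a learned gate scores all experts, but only the k highest-scoring experts are actually executed per input. All names and shapes here are illustrative, not taken from any specific model surveyed.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x through the top-k experts chosen by a softmax gate.

    x:       (d,) input vector
    gate_w:  (d, n_experts) gating weight matrix
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = x @ gate_w                      # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k largest scores
    # Renormalize the selected scores with a softmax (standard top-k gating).
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # Conditional computation: only the k selected experts are evaluated.
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Toy usage: 4 linear "experts" on a 3-dimensional input, k=2 active.
rng = np.random.default_rng(0)
experts = [lambda v, W=rng.standard_normal((3, 3)): W @ v for _ in range(4)]
gate_w = rng.standard_normal((3, 4))
y = moe_forward(rng.standard_normal(3), gate_w, experts, k=2)
```

With k fixed, per-token compute stays roughly constant as the number of experts grows, which is precisely the capacity/efficiency trade-off the survey's optimization techniques target.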