This paper presents a comprehensive review of the Mixture-of-Experts (MoE) architecture in large language models (LLMs), highlighting its ability to substantially enhance model performance while adding little computational overhead, since only a small subset of experts is activated for each input. Through a systematic analysis spanning theoretical foundations, core architectural designs, and LLM applications, we examine expert gating and routing mechanisms, hierarchical and sparse MoE configurations, meta-learning approaches, multimodal and multitask learning scenarios, real-world deployment cases, and recent advances and challenges in deep learning. Our analysis identifies key advantages of MoE, including greater model capacity than equivalent Bayesian approaches, improved task-specific performance, and the ability to scale capacity efficiently. We also underscore that expert diversity, accurate calibration, and reliable inference aggregation are essential to realizing the full effectiveness of MoE architectures. Finally, this review outlines current research limitations, open challenges, and promising future directions, providing a foundation for continued innovation in MoE architectures and their applications.
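To make the gating-and-routing idea referenced above concrete, the following is a minimal PyTorch sketch of a sparse top-k MoE layer. It is illustrative only: the class and parameter names (TopKMoE, n_experts, k, d_hidden) are assumptions for this sketch, not the implementation of any system surveyed in the paper.

```python
# Minimal sketch of a sparse top-k MoE layer (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Router (gating network): scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Experts: independent feed-forward networks with identical shapes.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to individual tokens for routing.
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                      # (n_tokens, n_experts)
        weights, indices = torch.topk(logits, self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # renormalize over the k selected experts
        out = torch.zeros_like(tokens)
        # Only the selected experts run for each token, so per-token compute stays
        # roughly constant as the number of experts grows.
        for e, expert in enumerate(self.experts):
            token_ids, slot = (indices == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)

# Example: route a small batch of token embeddings through the sparse layer.
if __name__ == "__main__":
    layer = TopKMoE(d_model=64, d_hidden=256)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

The sketch loops over experts for readability; production systems instead batch tokens per expert and add auxiliary load-balancing losses, topics discussed in the routing and deployment sections of this review.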