The scaling of large language models (LLMs) has revolutionized their capabilities in various tasks, yet this growth must be matched with efficient computational strategies. The Mixture-of-Experts (MoE) architecture stands out for its ability to scale model size without significantly increasing training costs. Despite these advantages, current MoE models often exhibit parameter inefficiency. For instance, a pre-trained MoE-based LLM with 52 billion parameters may perform only comparably to a standard dense model with 6.7 billion parameters. As a crucial component of MoE, the routers in current models assign tokens independently at each layer, without leveraging historical routing information, potentially leading to suboptimal token-expert combinations and contributing to this parameter inefficiency. To alleviate this issue, we introduce the Layerwise Recurrent Router for Mixture-of-Experts (RMoE). RMoE leverages a Gated Recurrent Unit (GRU) to establish dependencies between routing decisions across consecutive layers. This layerwise recurrence can be computed efficiently in parallel across input tokens and introduces only negligible overhead. Our extensive empirical evaluations demonstrate that RMoE-based language models consistently outperform a spectrum of baseline models. Furthermore, RMoE introduces a novel computation stage orthogonal to existing methods, allowing seamless compatibility with other MoE architectures. Our analyses attribute RMoE's gains to its effective cross-layer information sharing, which also improves expert selection and diversity. Our code is available at https://github.com/qiuzh20/RMoE
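To make the core mechanism concrete, the following is a minimal NumPy sketch of a layerwise recurrent router: a GRU cell carries a per-token hidden state across layers, and each layer's expert logits are projected from that state, so routing decisions depend on earlier layers' routing inputs. All names, shapes, and the top-k renormalization scheme here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentRouter:
    """Hypothetical sketch of a GRU-based layerwise recurrent MoE router."""

    def __init__(self, d_model, d_hidden, n_experts, top_k=2):
        s = 1.0 / np.sqrt(d_model)
        self.top_k = top_k
        # GRU parameters: input-to-hidden (W) and hidden-to-hidden (U)
        # for the update gate, reset gate, and candidate state.
        self.W = rng.normal(0, s, (3, d_model, d_hidden))
        self.U = rng.normal(0, s, (3, d_hidden, d_hidden))
        # Projection from the recurrent state to per-expert logits.
        self.W_out = rng.normal(0, s, (d_hidden, n_experts))

    def step(self, x, h):
        """One layer's routing. x: (batch, d_model), h: (batch, d_hidden)."""
        z = sigmoid(x @ self.W[0] + h @ self.U[0])        # update gate
        r = sigmoid(x @ self.W[1] + h @ self.U[1])        # reset gate
        h_tilde = np.tanh(x @ self.W[2] + (r * h) @ self.U[2])
        h_new = (1.0 - z) * h + z * h_tilde               # GRU state update
        logits = h_new @ self.W_out                       # expert scores
        # Top-k expert selection with softmax weights renormalized over
        # the selected experts (a common MoE gating choice).
        top = np.argsort(logits, axis=-1)[:, -self.top_k:]
        w = np.take_along_axis(logits, top, axis=-1)
        w = np.exp(w - w.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return top, w, h_new

# Usage: route a batch of 5 tokens through 3 consecutive MoE layers.
# The hidden state h links each layer's routing to the layers before it.
router = RecurrentRouter(d_model=16, d_hidden=8, n_experts=4)
h = np.zeros((5, 8))                          # initial per-token state
for layer in range(3):
    x = rng.normal(size=(5, 16))              # this layer's token inputs
    experts, weights, h = router.step(x, h)
```

Because the recurrence runs over layers rather than over the token sequence, every token's GRU step at a given layer is independent of the other tokens, which is why the abstract notes it parallelizes efficiently across the batch.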