In the era of Large Language Models (LLMs), Mixture-of-Experts (MoE) architectures offer a promising approach to managing computational costs while scaling up model parameters. Conventional MoE-based LLMs typically employ static Top-K routing, which activates a fixed number of experts for every token, regardless of each token's significance within the context. In this paper, we propose a novel Ada-K routing strategy that dynamically adjusts the number of activated experts for each token, thereby improving the balance between computational efficiency and model performance. Specifically, our strategy incorporates learnable, lightweight allocator modules that allocate expert resources to each token according to its contextual needs. These allocators are designed to be fully pluggable, making them broadly applicable across all mainstream MoE-based LLMs. We leverage the Proximal Policy Optimization (PPO) algorithm to enable end-to-end learning of this non-differentiable decision-making framework. Extensive evaluations on four popular baseline models demonstrate that our Ada-K routing method significantly outperforms conventional Top-K routing. Compared to Top-K, our method achieves over a 25% reduction in FLOPs and more than a 20% inference speedup while still improving performance across various benchmarks. Moreover, training Ada-K is highly efficient: even for Mixtral-8x22B, a MoE-based LLM with more than 140B parameters, training completes within 8 hours. Detailed analysis shows that harder tasks, middle layers, and content words tend to activate more experts, providing valuable insights for future adaptive MoE system designs. Both the training code and model checkpoints will be publicly available.
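To make the routing mechanism concrete, the following is a minimal NumPy sketch of what Ada-K style inference could look like: a lightweight linear allocator predicts, per token, how many experts to activate, and the router then selects that many top-scoring experts. All names, shapes, and the greedy argmax decoding of the allocator are illustrative assumptions, not the paper's actual implementation (the paper trains the allocator with PPO, which is omitted here).

```python
import numpy as np

def ada_k_route(hidden, router_w, alloc_w, alloc_b, max_k=8):
    """Hypothetical sketch of Ada-K routing at inference time.

    hidden:            (tokens, d) token hidden states
    router_w:          (d, n_experts) standard MoE router weights
    alloc_w, alloc_b:  weights of a lightweight linear allocator that
                       scores candidate budgets k in {1..max_k} per token
                       (assumed form; the paper's allocator is trained
                       end-to-end with PPO).
    Returns the per-token expert count and, for each token, the chosen
    expert indices with their normalized (softmax) routing weights.
    """
    logits = hidden @ router_w                    # (tokens, n_experts)
    k_logits = hidden @ alloc_w + alloc_b         # (tokens, max_k)
    ks = k_logits.argmax(axis=-1) + 1             # greedy budget per token
    routes = []
    for tok_logits, k in zip(logits, ks):
        top = np.argsort(tok_logits)[::-1][:k]    # k highest-scoring experts
        w = np.exp(tok_logits[top] - tok_logits[top].max())
        routes.append((top, w / w.sum()))         # experts + softmax weights
    return ks, routes
```

In contrast to static Top-K, where every token pays for the same number of expert forward passes, the per-token budget `ks` here lets unimportant tokens use fewer experts, which is where the FLOPs savings come from.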