Mixture of Experts (MoE) models activate only a few feed-forward networks (FFNs) per token, achieving an effective trade-off between computational cost and performance. In conventional MoE, each expert is treated as entirely independent, and experts are combined in a discrete space; as a result, training each expert effectively becomes difficult as the number of experts grows. To stabilize training while increasing the number of experts, we propose $\infty$-MoE, which selects a portion of the parameters of a large FFN based on continuous values sampled for each token. By treating experts as points in a continuous space, this approach supports an effectively infinite number of experts while maintaining computational efficiency. Experiments show that a GPT-2 Small-based $\infty$-MoE model, with 129M active and 186M total parameters, matches the performance of a dense GPT-2 Medium with 350M parameters. Adjusting the number of sampled experts at inference time enables a flexible trade-off between accuracy and speed, improving accuracy by up to 2.5\% over conventional MoE.
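To make the core idea concrete, the following is a minimal sketch of continuous expert selection, not the paper's exact method: for each token we sample a continuous value $u \in [0, 1)$ and activate only the contiguous window of the large FFN's hidden units addressed by $u$. All names and the windowing scheme here are illustrative assumptions.

```python
import numpy as np

def infinite_moe_ffn(x, W_in, W_out, expert_frac=0.25, seed=None):
    """Hypothetical sketch of continuous expert selection (not the
    authors' implementation): each token samples a continuous value u,
    which addresses a slice of the large FFN's hidden units.

    x:     (n_tokens, d_model) token representations
    W_in:  (d_model, d_big)    large FFN input projection
    W_out: (d_big, d_model)    large FFN output projection
    """
    rng = np.random.default_rng(seed)
    n_tokens, d_model = x.shape
    d_big = W_in.shape[1]
    k = max(1, int(d_big * expert_frac))    # hidden units active per token
    u = rng.random(n_tokens)                # continuous expert "address" per token
    starts = (u * (d_big - k)).astype(int)  # window start index per token
    out = np.empty_like(x)
    for t in range(n_tokens):
        s = starts[t]
        # Only the addressed slice of the FFN participates for this token.
        h = np.maximum(x[t] @ W_in[:, s:s + k], 0.0)  # ReLU on active slice
        out[t] = h @ W_out[s:s + k, :]
    return out

# Toy usage: 4 tokens, model dim 8, large FFN hidden dim 32.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W_in = rng.standard_normal((8, 32))
W_out = rng.standard_normal((32, 8))
y = infinite_moe_ffn(x, W_in, W_out, expert_frac=0.25, seed=1)
print(y.shape)  # (4, 8)
```

Because nearby values of $u$ select overlapping parameter windows, nearby experts share parameters, which is one way continuous selection can stabilize training relative to fully independent discrete experts.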