Modern Mixture-of-Experts (MoE) language models are designed around total parameters (memory footprint) and active parameters (inference cost). However, we find that these two factors alone are insufficient to describe an optimal architecture. Through a systematic study, we demonstrate that MoE performance is primarily determined by total parameters ($N_{total}$) and expert sparsity ($s := n_{exp}/n_{topk}$). Moreover, $n_{exp}$ and $n_{topk}$ do not "cancel out" within the sparsity ratio; instead, a larger total number of experts slightly penalizes performance by forcing a reduction in core model dimensions (depth and width) to meet memory constraints. This motivates a simple principle for MoE design: maximize $N_{total}$ while minimizing both the sparsity $s$ (i.e., maximizing $n_{topk}$) and $n_{exp}$ under the given constraints. Our findings provide a robust framework for resolving architectural ambiguity and guiding MoE design.
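The relationship between the quantities above can be made concrete with a minimal sketch. The function below counts parameters for a single hypothetical MoE feed-forward layer (the layer shape and all sizes are illustrative assumptions, not values from the paper): the total count scales with $n_{exp}$ and drives memory footprint, the active count scales with $n_{topk}$ and drives inference cost, and their ratio is the sparsity $s = n_{exp}/n_{topk}$.

```python
# Illustrative sketch (assumed toy sizes, not from the paper): how total
# parameters, active parameters, and sparsity s = n_exp / n_topk relate
# for one MoE feed-forward layer.

def moe_ffn_params(d_model: int, d_ff: int, n_exp: int, n_topk: int):
    """Parameter counts for one MoE FFN layer with n_exp experts,
    of which n_topk are routed to (active) per token."""
    per_expert = 2 * d_model * d_ff   # up- and down-projection weights
    total = n_exp * per_expert        # memory footprint (N_total per layer)
    active = n_topk * per_expert      # inference cost per token
    sparsity = n_exp / n_topk         # s := n_exp / n_topk
    return total, active, sparsity

# Hypothetical configuration: 64 experts, 8 active per token -> s = 8.
total, active, s = moe_ffn_params(d_model=1024, d_ff=4096, n_exp=64, n_topk=8)
print(total, active, s)  # 536870912 67108864 8.0
```

Note that two configurations with the same $s$ (e.g. 64-of-8 and 128-of-16) have identical sparsity but different $n_{exp}$; the abstract's claim is that the larger-$n_{exp}$ variant pays a small penalty because its expert memory crowds out depth and width.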