Mixture-of-Experts (MoE) architectures have emerged as a promising approach to scaling Large Language Models (LLMs). MoE improves efficiency by activating only a subset of experts per token. Recent work shows that fine-grained experts substantially enrich the combinatorial flexibility of active experts and enhance model expressiveness. However, such designs are fundamentally limited by the layer-local routing mechanism: each layer is restricted to its own expert pool, forcing a careful trade-off between expert dimensionality and routing diversity under a fixed parameter budget. We propose ReXMoE, a novel MoE architecture that improves routing beyond existing layer-local approaches by allowing routers to reuse experts across adjacent layers. ReXMoE decouples expert dimensionality from per-layer budgets, enabling richer expert combinations without sacrificing individual expert capacity or inflating the overall parameter count. To this end, we introduce a Progressive Scaling Routing (PSR) strategy that gradually enlarges the candidate expert pool during training. As a result, ReXMoE improves both language modeling and downstream task performance. Extensive experiments on models ranging from 0.5B to 7B parameters across different architectures demonstrate that ReXMoE consistently improves performance under fixed architectural dimensions, establishing it as a new design paradigm for parameter-efficient and scalable MoE-based LLMs.
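To make the two ideas in the abstract concrete, here is a minimal PyTorch-style sketch of (1) a router that can also select experts owned by an adjacent layer and (2) a candidate pool that is only partially visible early in training and grows over time (Progressive Scaling Routing). The abstract gives no implementation details, so all names (Expert, CrossLayerMoE, link_adjacent, set_pool_fraction), sizes, and the visibility schedule below are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn


class Expert(nn.Module):
    """A standard FFN expert."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return self.net(x)


class CrossLayerMoE(nn.Module):
    """One MoE layer whose router may also pick experts owned by an adjacent layer."""

    def __init__(self, d_model: int, d_ff: int, n_local_experts: int, top_k: int = 2):
        super().__init__()
        self.local_experts = nn.ModuleList(
            Expert(d_model, d_ff) for _ in range(n_local_experts)
        )
        self.shared_experts = []          # experts borrowed from a neighbor; not owned here
        self.router = nn.Linear(d_model, n_local_experts, bias=False)
        self.d_model = d_model
        self.top_k = top_k
        self.pool_fraction = 1.0          # PSR: fraction of the full pool visible to the router

    def link_adjacent(self, neighbor: "CrossLayerMoE") -> None:
        # Reuse the neighbor's experts by reference (parameter sharing, no copies),
        # and widen the router to cover the enlarged candidate pool.
        self.shared_experts = list(neighbor.local_experts)
        pool_size = len(self.local_experts) + len(self.shared_experts)
        self.router = nn.Linear(self.d_model, pool_size, bias=False)

    def set_pool_fraction(self, frac: float) -> None:
        # Assumed PSR schedule: start with the local experts only and gradually
        # expose the shared ones as training proceeds.
        self.pool_fraction = max(0.0, min(1.0, frac))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (n_tokens, d_model)
        pool = list(self.local_experts) + self.shared_experts
        n_visible = max(len(self.local_experts),
                        int(round(self.pool_fraction * len(pool))))
        logits = self.router(x)[:, :n_visible]            # hide not-yet-visible experts
        weights, idx = logits.softmax(dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                       # dispatch tokens to chosen experts
            for e in range(n_visible):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * pool[e](x[mask])
        return out


# Usage: layer_b's router can draw from its own 4 experts plus layer_a's 4,
# with only part of the pool exposed early in training.
layer_a = CrossLayerMoE(d_model=64, d_ff=256, n_local_experts=4)
layer_b = CrossLayerMoE(d_model=64, d_ff=256, n_local_experts=4)
layer_b.link_adjacent(layer_a)
layer_b.set_pool_fraction(0.5)
y = layer_b(torch.randn(10, 64))
```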