Mixture of Experts (MoE) architectures enable efficient scaling of neural networks but suffer from expert collapse, where routing converges to a few dominant experts. This reduces effective model capacity and causes catastrophic interference during adaptation. We propose the Spectrally-Regularized Mixture of Experts (SR-MoE), which imposes geometric constraints on the routing manifold to enforce structural modularity. Our method uses dual regularization: spectral norm constraints bound the Lipschitz constant of the routing function, while stable rank penalties preserve high-dimensional feature diversity in expert selection. We evaluate SR-MoE across architectural scales and dataset complexities using modular one-shot adaptation tasks. Results show that traditional linear gating fails with increasing depth (accuracy drops of up to 4.72% due to expert entanglement), whereas SR-MoE maintains structural integrity (mean interference of -0.32%). Our spectral constraints facilitate positive knowledge transfer, enabling localized expert updates without global performance decay. SR-MoE provides a general solution for building high-capacity, modular networks capable of stable lifelong learning.
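To make the dual regularization concrete, the sketch below shows one plausible form it could take for a linear gate with weight matrix W: a hinge penalty on the spectral norm (which equals the Lipschitz constant of a linear routing function) plus a reward for high stable rank. The function name, coefficient names (`lambda_spec`, `lambda_rank`), and the cap value are illustrative assumptions, not the paper's specification.

```python
import torch

def spectral_stable_rank_penalty(W, lipschitz_cap=1.0,
                                 lambda_spec=1.0, lambda_rank=0.1):
    """Hypothetical sketch of a dual regularizer in the spirit of SR-MoE.

    W: router/gating weight matrix of shape (num_experts, d_model).
    All hyperparameter names and values here are assumptions for illustration.
    """
    # Largest singular value = spectral norm = Lipschitz constant of the linear gate.
    sigma = torch.linalg.matrix_norm(W, ord=2)
    # Penalize the gate only when its Lipschitz constant exceeds the cap.
    spec_term = torch.clamp(sigma - lipschitz_cap, min=0.0) ** 2
    # Stable rank = ||W||_F^2 / sigma_max^2; rewarding it spreads routing
    # features over many directions rather than a few dominant experts.
    stable_rank = (torch.linalg.matrix_norm(W, ord='fro') ** 2) / (sigma ** 2 + 1e-8)
    rank_term = -stable_rank
    return lambda_spec * spec_term + lambda_rank * rank_term
```

In a training loop this penalty would simply be added to the task loss for each MoE layer's router; the relative weighting of the two terms is a tuning choice not specified in the abstract.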