Horseshoe mixture-of-experts (HS-MoE) models provide a Bayesian framework for sparse expert selection in mixture-of-experts architectures. We combine the horseshoe prior's adaptive global-local shrinkage with input-dependent gating, yielding data-adaptive sparsity in expert usage. Our primary methodological contribution is a particle learning algorithm for sequential inference, in which the filter is propagated forward in time while tracking only a set of sufficient statistics. We also discuss how HS-MoE relates to modern mixture-of-experts layers in large language models, which are deployed under extreme sparsity constraints (e.g., activating a small number of experts per token out of a large pool).
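For reference, the standard horseshoe hierarchy can be written as follows; here it is stated for illustrative gating coefficients \(w_{jk}\) (expert \(j\), input feature \(k\)), a notation chosen for this sketch rather than taken from the model above:
\[
  w_{jk} \mid \lambda_{jk}, \tau \;\sim\; \mathcal{N}\!\left(0,\; \lambda_{jk}^{2}\,\tau^{2}\right),
  \qquad
  \lambda_{jk} \;\sim\; \mathrm{C}^{+}(0, 1),
  \qquad
  \tau \;\sim\; \mathrm{C}^{+}(0, 1),
\]
where \(\mathrm{C}^{+}(0, 1)\) denotes the standard half-Cauchy distribution. The local scales \(\lambda_{jk}\) let individual coefficients escape shrinkage while the global scale \(\tau\) pulls the ensemble toward zero, which is the adaptive global-local behavior that, coupled with input-dependent gating, produces sparsity in expert usage.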