The Mixture of Experts (MoE) framework has become a popular architecture for large language models due to its superior performance over dense models. However, training MoEs from scratch at large scale is prohibitively expensive. Existing methods mitigate this cost by pre-training multiple dense expert models independently and using them to initialize an MoE: the experts' feed-forward network (FFN) parameters initialize the MoE's experts, while all other parameters are merged. However, this limits the reuse of dense model parameters to the FFN layers alone, constraining the advantages of "upcycling" these models into MoEs. We propose BAM (Branch-Attend-Mix), a simple yet effective method that addresses this shortcoming. BAM makes full use of the specialized dense models: it not only uses their FFN parameters to initialize the MoE layers, but also upcycles their attention parameters into a soft variant of Mixture of Attention (MoA) layers. We explore two methods for upcycling attention parameters: 1) initializing separate attention experts from the dense models, including all attention parameters, for the best model performance; and 2) sharing key and value parameters across all experts for better inference efficiency. To further improve efficiency, we adapt a parallel attention transformer architecture to MoEs, which allows the attention experts and FFN experts to be computed concurrently. Our experiments on seed models ranging from 590 million to 2 billion parameters demonstrate that BAM surpasses baselines in both perplexity and downstream task performance under the same computational and data constraints.
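To make the architecture concrete, the following is a minimal NumPy sketch of a BAM-style block: a soft Mixture of Attention (every token takes a router-weighted sum over all attention experts) combined with a standard top-1 FFN MoE, arranged in the parallel-attention form where both branches read the same input and their outputs are summed into the residual stream. This is a toy single-head illustration under assumed shapes and router designs, not the paper's actual implementation; all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv, Wo):
    # Single-head self-attention for brevity; x has shape (seq, d).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return (scores @ v) @ Wo

def bam_layer(x, attn_experts, ffn_experts, Wr_attn, Wr_ffn):
    """Toy parallel BAM block: soft MoA + top-1 FFN MoE on the same input."""
    # Soft MoA: each token uses a weighted sum of ALL attention experts'
    # outputs (the "soft variant" mentioned in the abstract).
    attn_probs = softmax(x @ Wr_attn)                   # (seq, n_attn)
    attn_outs = np.stack([attention(x, *e) for e in attn_experts], axis=1)
    moa_out = (attn_probs[..., None] * attn_outs).sum(axis=1)

    # Standard top-1 FFN MoE: each token is routed to a single FFN expert.
    top1 = (x @ Wr_ffn).argmax(axis=-1)                 # (seq,)
    ffn_out = np.zeros_like(x)
    for i, e_idx in enumerate(top1):
        W1, W2 = ffn_experts[e_idx]
        ffn_out[i] = np.maximum(x[i] @ W1, 0.0) @ W2    # ReLU FFN

    # Parallel-attention form: both branches are computed from the same
    # input, so they can run concurrently; sum into the residual stream.
    return x + moa_out + ffn_out

# Tiny random instantiation (assumed dimensions, for shape-checking only).
d, d_ff, seq, n_attn, n_ffn = 8, 16, 4, 2, 2
attn_experts = [tuple(rng.standard_normal((d, d)) * 0.1 for _ in range(4))
                for _ in range(n_attn)]
ffn_experts = [(rng.standard_normal((d, d_ff)) * 0.1,
                rng.standard_normal((d_ff, d)) * 0.1)
               for _ in range(n_ffn)]
x = rng.standard_normal((seq, d))
y = bam_layer(x, attn_experts, ffn_experts,
              rng.standard_normal((d, n_attn)),
              rng.standard_normal((d, n_ffn)))
```

In the shared key-value variant described in the abstract, the per-expert `Wk` and `Wv` matrices would collapse into a single pair shared by all attention experts, shrinking the KV cache at inference time.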