Mixture-of-Experts (MoE) activates only a subset of experts during inference, allowing the model to maintain low inference FLOPs and latency even as the parameter count scales up. However, because MoE dynamically selects its experts, all experts must be loaded into VRAM. Their large parameter size still limits deployment, and offloading, which loads experts into VRAM only when needed, significantly increases inference latency. To address this, we propose Mixture of Lookup Experts (MoLE), a new MoE architecture that is efficient in both communication and VRAM usage. In MoLE, the experts are Feed-Forward Networks (FFNs) during training, taking the output of the embedding layer as input. Before inference, these experts can be re-parameterized as lookup tables (LUTs) that retrieve expert outputs based on input ids, and offloaded to storage devices. Therefore, we do not need to perform expert computations during inference. Instead, we directly retrieve the experts' computation results based on input ids and load them into VRAM, so the resulting communication overhead is negligible. Experiments show that, with the same FLOPs and VRAM usage, MoLE achieves inference speeds comparable to dense models and significantly faster than MoE with expert offloading, while maintaining performance on par with MoE.
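The re-parameterization described above works because each MoLE expert takes only the embedding of a token as input, and a trained embedding table is fixed, so the expert's output for every possible token id can be precomputed once. A minimal NumPy sketch of this idea follows; the dimensions, the ReLU FFN form, and all variable names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Illustrative sketch (hypothetical sizes): during training each expert is an
# FFN applied to the token embedding; after training, the embedding of each
# token id is fixed, so every expert output can be precomputed into a LUT.
rng = np.random.default_rng(0)
vocab_size, d_model, d_ff, num_experts = 100, 16, 32, 4

embedding = rng.standard_normal((vocab_size, d_model))

def make_ffn():
    # A simple two-layer ReLU FFN standing in for a trained expert.
    w1 = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)
    w2 = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_ff)
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

experts = [make_ffn() for _ in range(num_experts)]

# Re-parameterization: one forward pass over the whole vocabulary per expert.
# Shape: (num_experts, vocab_size, d_model); this tensor can live in storage.
luts = np.stack([f(embedding) for f in experts])

# Inference: expert "computation" reduces to a gather on the input ids.
input_ids = np.array([3, 41, 7])
out_lut = luts[1][input_ids]                 # expert 1 via table lookup
out_ffn = experts[1](embedding[input_ids])   # expert 1 via FFN forward pass
assert np.allclose(out_lut, out_ffn)         # identical results by construction
```

Because the lookup depends only on input ids, only the few rows needed for the current tokens are fetched from storage into VRAM, which is the source of the negligible communication overhead claimed above.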