The escalating size of Mixture-of-Experts (MoE) based Large Language Models (LLMs) presents significant computational and memory challenges, necessitating innovative solutions that enhance efficiency without compromising model accuracy. Structured sparsity has emerged as a compelling strategy for addressing these challenges by leveraging emerging sparse computing hardware. Prior works focus mainly on sparsity in model parameters, neglecting the inherent sparse patterns in activations. This oversight incurs additional computational costs on the activation side, resulting in suboptimal performance. This paper presents Samoyeds, an acceleration system for MoE LLMs built on Sparse Tensor Cores (SpTCs). Samoyeds is the first to apply sparsity simultaneously to both activations and model parameters. It introduces a bespoke sparse data format tailored for MoE computation and develops a specialized sparse-sparse matrix multiplication kernel. Furthermore, Samoyeds incorporates systematic optimizations designed specifically for executing dual-side structured sparse MoE LLMs on SpTCs, further improving system performance. Evaluations show that Samoyeds outperforms state-of-the-art (SOTA) works by up to 1.99$\times$ at the kernel level and 1.58$\times$ at the model level. Moreover, it improves memory efficiency, increasing the maximum supported batch size by 4.41$\times$ on average. Finally, Samoyeds surpasses existing SOTA structured sparse solutions in both model accuracy and hardware portability.
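To make the dual-side structured-sparsity idea concrete, the following minimal NumPy sketch prunes both operands of a matrix multiplication to the 2:4 (N:M) pattern supported by NVIDIA Sparse Tensor Cores and computes a dense reference result. The 2:4 choice and the helper name `prune_2_4` are illustrative assumptions for exposition; this is not Samoyeds' actual data format or SpMM kernel.

```python
import numpy as np

def prune_2_4(x: np.ndarray) -> np.ndarray:
    """Illustrative 2:4 structured pruning (an assumption, not the paper's
    format): zero the 2 smallest-magnitude values in every contiguous group
    of 4 along the last (reduction) dimension."""
    rows, cols = x.shape
    assert cols % 4 == 0, "reduction dim must be a multiple of the group size"
    groups = x.reshape(rows, cols // 4, 4)
    # Keep the indices of the 2 largest-magnitude entries in each group of 4.
    keep = np.argsort(np.abs(groups), axis=-1)[..., 2:]
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=-1)
    return (groups * mask).reshape(rows, cols)

rng = np.random.default_rng(0)
# Dual-side sparsity: both the activation X and the weight W are structured
# sparse along the shared reduction dimension K.
X = prune_2_4(rng.standard_normal((8, 16)))   # activations, M x K
W = prune_2_4(rng.standard_normal((4, 16)))   # expert weights, N x K
Y = X @ W.T                                   # dense reference for the sparse-sparse product, M x N
```

On real SpTC hardware, only the kept nonzeros and their per-group metadata would be stored and multiplied; the sketch above merely shows which values a dual-side 2:4 scheme retains on each operand.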