Activation sparsity refers to the existence of substantial weakly-contributing elements among activation outputs. As a prevalent property of models using the ReLU activation function, activation sparsity has proven to be a promising paradigm for boosting model inference efficiency. Nevertheless, most large language models (LLMs) adopt activation functions without intrinsic activation sparsity (e.g., GELU and Swish). Some recent efforts have explored introducing ReLU or its variants as substitute activation functions to help LLMs achieve activation sparsity and inference acceleration, but few can simultaneously obtain both high sparsity and comparable model performance. This paper introduces a simple and effective sparsification method named "ProSparse" to push LLMs toward higher activation sparsity while maintaining comparable performance. Specifically, after substituting the activation function of LLMs with ReLU, ProSparse adopts progressive sparsity regularization with a factor that increases smoothly along multi-stage sine curves. This enhances activation sparsity and mitigates performance degradation by avoiding radical shifts in activation distributions. With ProSparse, we obtain high sparsity of 89.32% for LLaMA2-7B, 88.80% for LLaMA2-13B, and 87.89% for the end-size MiniCPM-1B, all achieving performance comparable to their original Swish-activated versions. These are the most sparsely activated models among open-source LLaMA versions and competitive end-size models, considerably surpassing ReluLLaMA-7B (66.98%) and ReluLLaMA-13B (71.56%). Our inference acceleration experiments further demonstrate the significant practical acceleration potential of LLMs with higher activation sparsity, achieving up to a 4.52$\times$ inference speedup.
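To make the schedule concrete, below is a minimal sketch of a progressive, multi-stage sine-curve regularization factor in the spirit of the description above. The function name, stage lengths, and peak factors are hypothetical illustrations rather than the paper's actual hyperparameters; the sketch assumes the sparsity regularization factor ramps from each stage's starting value to its peak along a quarter-period sine curve.

```python
import math

def sparsity_reg_factor(step: int, stage_steps: list[int],
                        stage_peaks: list[float]) -> float:
    """Hypothetical progressive schedule: within each stage, the sparsity
    regularization factor rises smoothly along a sine curve from the previous
    stage's peak to the current stage's peak, avoiding abrupt shifts in
    activation distributions."""
    prev_peak, start = 0.0, 0
    for steps, peak in zip(stage_steps, stage_peaks):
        end = start + steps
        if step < end:
            progress = (step - start) / steps  # in [0, 1) within this stage
            # sin(pi/2 * progress) ramps smoothly from 0 to 1
            return prev_peak + (peak - prev_peak) * math.sin(0.5 * math.pi * progress)
        prev_peak, start = peak, end
    return stage_peaks[-1]  # hold the final factor after all stages

# Illustrative usage: three 1,000-step stages with increasing peak factors
factor = sparsity_reg_factor(step=1500, stage_steps=[1000, 1000, 1000],
                             stage_peaks=[1e-4, 5e-4, 1e-3])
```

Under this reading, each stage's stronger regularization takes effect only gradually, after activations have adapted to the previous level, which matches the abstract's stated goal of avoiding radical shifts in activation distributions.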