We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the original weights, one-shot pruned for a specific cluster of input values. We call this approach $\textit{Sparse Expansion}$. We show that, for models such as Llama 2 70B, as we increase the number of sparse experts, Sparse Expansion outperforms all other one-shot sparsification approaches for the same inference FLOP budget per token, and that this gap grows as sparsity increases, leading to inference speedups. But why? To answer this, we provide strong evidence that the mixture of sparse experts effectively $\textit{disentangles}$ the input-output relationship of every individual neuron across clusters of inputs. Specifically, sparse experts approximate the dense neuron output distribution with fewer weights by decomposing the distribution into a collection of simpler ones, each covered by a separate sparse dot product. Interestingly, we show that the Wasserstein distance between a neuron's output distribution and a Gaussian distribution is an indicator of its entanglement level and of its contribution to the accuracy of the model. Every layer of an LLM has a fraction of highly entangled Wasserstein neurons, and model performance suffers more when these are sparsified than when others are.
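To make the mechanism concrete, the following is a minimal sketch of the Sparse Expansion idea for a single linear layer, assuming k-means clustering of calibration inputs and a simple calibration-aware magnitude criterion as a stand-in for the one-shot pruner; all function names, variables, and hyperparameters here are illustrative, not the exact implementation.

```python
# Sketch: expand one dense linear layer into several sparse experts,
# each pruned on the inputs of one cluster, then route by cluster at inference.
import numpy as np
from sklearn.cluster import KMeans


def sparse_expansion_layer(W, X_calib, num_experts=4, sparsity=0.5):
    """W: dense weight matrix (out_dim x in_dim); X_calib: calibration inputs
    (num_samples x in_dim). Returns a forward function over sparse experts."""
    # 1. Cluster the calibration inputs into expert groups.
    km = KMeans(n_clusters=num_experts, n_init=10).fit(X_calib)

    experts = []
    for e in range(num_experts):
        Xe = X_calib[km.labels_ == e]                   # inputs routed to expert e
        # 2. One-shot prune a copy of W for this cluster. Here: keep the weights
        #    with the largest |W| * mean|x| saliency, a simple stand-in for a
        #    stronger calibration-aware one-shot pruner.
        saliency = np.abs(W) * np.abs(Xe).mean(axis=0)  # out_dim x in_dim
        k = max(1, int(W.size * (1.0 - sparsity)))      # number of weights to keep
        thresh = np.partition(saliency.ravel(), -k)[-k]
        experts.append(np.where(saliency >= thresh, W, 0.0))

    def forward(x):
        # 3. At inference, each input uses only the sparse expert of its cluster.
        e = km.predict(x.reshape(1, -1))[0]
        return experts[e] @ x

    return forward
```

Because each token multiplies against only one sparse expert, the per-token FLOP budget matches that of a single sparsified model, while the expert is specialized to the token's input cluster.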
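The entanglement indicator can likewise be sketched as the 1-D Wasserstein distance between a neuron's empirical output distribution and a Gaussian matched to its mean and standard deviation. The scale normalization below is an assumption made for cross-neuron comparability, not necessarily the paper's exact choice.

```python
# Sketch: score how far a neuron's output distribution is from Gaussian.
# Larger scores suggest a more entangled, harder-to-sparsify "Wasserstein neuron".
import numpy as np
from scipy.stats import wasserstein_distance


def neuron_entanglement_score(neuron_outputs, seed=0):
    """neuron_outputs: 1-D array of a neuron's outputs over a calibration set."""
    rng = np.random.default_rng(seed)
    mu, sigma = neuron_outputs.mean(), neuron_outputs.std()
    # Sample a Gaussian with the same mean and standard deviation.
    gaussian_sample = rng.normal(mu, sigma, size=neuron_outputs.shape[0])
    # Normalize by sigma so scores are comparable across neurons of different
    # output scales (an illustrative choice).
    return wasserstein_distance(neuron_outputs, gaussian_sample) / (sigma + 1e-8)
```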