The high computational costs of large language models (LLMs) have led to a flurry of research on LLM compression, via methods such as quantization, sparsification, or structured pruning. A new frontier in this area is given by \emph{dynamic, non-uniform} compression methods, which adjust the compression levels (e.g., sparsity) per-block or even per-layer in order to minimize accuracy loss, while guaranteeing a global compression threshold. Yet, current methods rely on heuristics for identifying the ``importance'' of a given layer towards the loss, based on assumptions such as \emph{error monotonicity}, i.e., that the end-to-end model compression error is proportional to the sum of layer-wise errors. In this paper, we revisit this area, and propose a new and general approach for dynamic compression that is provably optimal in a given input range. We begin from the motivating observation that, in general, \emph{error monotonicity does not hold for LLMs}: compressed models with a lower sum of per-layer errors can perform \emph{worse} than models with higher error sums. To address this, we propose a new general evolutionary framework for dynamic LLM compression called EvoPress, which has provable convergence, and low sample and evaluation complexity. We show that these theoretical guarantees lead to highly competitive practical performance for dynamic compression of Llama, Mistral and Phi models. Via EvoPress, we set new state-of-the-art results across all compression approaches: structural pruning (block/layer dropping), unstructured sparsity, as well as quantization with dynamic bitwidths. Our code is available at \url{https://github.com/IST-DASLab/EvoPress}.
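To make the setting concrete, the following is a minimal toy sketch (not the paper's actual EvoPress algorithm) of an evolutionary $(1{+}1)$ search over per-layer compression levels under a fixed global budget, scored by a black-box end-to-end loss. All names, the synthetic sensitivity values, and the quadratic loss are illustrative assumptions; in practice the score would come from evaluating the compressed model itself, which is precisely what matters when error monotonicity fails.

```python
import random

NUM_LAYERS = 8
LEVELS = [0, 1, 2, 3]           # hypothetical per-layer compression levels
BUDGET = NUM_LAYERS * 2         # global constraint: total level is fixed

# Hypothetical per-layer sensitivities: some layers tolerate compression far
# better than others, which is what makes non-uniform allocation pay off.
SENSITIVITY = [0.5, 3.0, 0.2, 1.5, 0.1, 2.5, 0.3, 1.0]

def loss(alloc):
    # Stand-in for an end-to-end evaluation (e.g., perplexity on a small
    # calibration set); deliberately non-linear in the per-layer levels.
    return sum(s * (l ** 2) for s, l in zip(SENSITIVITY, alloc))

def mutate(alloc):
    # Shift one unit of compression between two layers, preserving the budget.
    child = list(alloc)
    i, j = random.sample(range(NUM_LAYERS), 2)
    if child[i] < max(LEVELS) and child[j] > min(LEVELS):
        child[i] += 1
        child[j] -= 1
    return child

def evolve(steps=2000, seed=0):
    random.seed(seed)
    parent = [2] * NUM_LAYERS    # uniform allocation meeting the budget
    best = loss(parent)
    for _ in range(steps):
        child = mutate(parent)
        score = loss(child)
        if score < best:         # greedy (1+1) selection on the global loss
            parent, best = child, score
    return parent, best

if __name__ == "__main__":
    alloc, score = evolve()
    assert sum(alloc) == BUDGET  # mutation preserved the budget throughout
    print(alloc, round(score, 2))
```

Because selection compares whole-model scores rather than sums of layer-wise proxies, this kind of search does not rely on error monotonicity; the actual framework additionally comes with convergence guarantees and low sample complexity, as described in the abstract.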