Structured pruning for large language models (LLMs) has attracted significant academic interest due to its ability to efficiently compress and accelerate LLMs by eliminating redundant weight groups at a coarse granularity. However, current structured pruning methods for LLMs typically rely on a single granularity for assessing weight importance, resulting in notable performance degradation on downstream tasks. Intriguingly, our empirical investigation reveals that unstructured pruning, which retains performance better by pruning weights at a finer granularity, \emph{i.e.}, individual weights, yields markedly different sparse LLM structures when compared with structured pruning. This suggests that combining both holistic and individual assessments of weight importance is essential for LLM pruning. Building on this insight, we introduce Hybrid-grained Weight Importance Assessment (HyWIA), a novel method that merges fine-grained and coarse-grained evaluations of weight importance for pruning LLMs. Leveraging an attention mechanism, HyWIA adaptively determines the optimal blend of granularities in weight importance assessment in an end-to-end pruning manner. Extensive experiments on LLaMA-V1/V2, Vicuna, Baichuan, and Bloom across various benchmarks demonstrate the effectiveness of HyWIA in pruning LLMs. For example, when pruning LLaMA-7B by 50\%, HyWIA surpasses the state-of-the-art LLM-Pruner by an average margin of 2.82\% in accuracy across seven downstream tasks.
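To make the hybrid-grained idea concrete, the following is a minimal, hypothetical sketch (not the authors' exact formulation): each prunable weight group is scored once at a fine granularity (summed per-weight magnitudes) and once at a coarse granularity (the group treated as a single unit via its L2 norm), and the two scores are mixed with softmax attention coefficients, so the blend is differentiable and could in principle be tuned end-to-end. The function name, the magnitude-based importance proxies, and the two-logit attention are all illustrative assumptions.

```python
import math

def hybrid_importance(weight_groups, attn_logits):
    """Blend fine- and coarse-grained importance for each weight group.

    weight_groups: list of lists of floats, one list per prunable group
                   (e.g. one attention head or MLP channel) -- illustrative
    attn_logits:   two learnable logits [fine, coarse] controlling the blend
    """
    # Softmax over the two logits gives mixing coefficients that sum to 1,
    # standing in for the attention mechanism over granularities.
    m = max(attn_logits)
    exps = [math.exp(l - m) for l in attn_logits]
    total = sum(exps)
    alpha_fine, alpha_coarse = exps[0] / total, exps[1] / total

    scores = []
    for g in weight_groups:
        fine = sum(abs(w) for w in g)              # individual-weight view
        coarse = math.sqrt(sum(w * w for w in g))  # whole-group (L2) view
        scores.append(alpha_fine * fine + alpha_coarse * coarse)
    return scores

# Toy example: three weight groups, equal logits -> a 50/50 blend.
groups = [[0.5, -0.5], [1.0, 0.0], [0.1, 0.1]]
scores = hybrid_importance(groups, [0.0, 0.0])

# Groups with the lowest hybrid scores would be pruned first.
prune_order = sorted(range(len(scores)), key=scores.__getitem__)
```

Note how the two views can disagree: `[0.5, -0.5]` and `[1.0, 0.0]` have identical fine-grained (L1) scores but different coarse-grained (L2) scores, which is exactly the kind of discrepancy a learned blend can arbitrate.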