Large Language Models (LLMs) present significant computational and memory challenges due to their extensive size, making pruning essential for their efficient deployment. Existing one-shot pruning methods often apply uniform sparsity constraints across layers or within each layer, resulting in suboptimal performance, especially at high sparsity ratios. This work introduces TRIM (Targeted Row-wise Iterative Metric-driven pruning), a novel approach that applies varying sparsity ratios to individual output dimensions (rows) within each layer. TRIM employs an iterative adjustment process guided by quality metrics to optimize dimension-wise sparsity allocation, focusing on reducing variance in quality retention across outputs to preserve critical information. TRIM can be seamlessly integrated with existing layer-wise pruning strategies. Our evaluations on perplexity and zero-shot tasks across diverse LLM families (Qwen2.5, LLaMA-2, and OPT) and sparsity levels demonstrate that TRIM achieves new state-of-the-art results and enhances stability. For instance, at 80% sparsity, TRIM reduces perplexity by 48% for Qwen2.5-14B and by over 90% for OPT-13B compared to baseline methods. We conclude that fine-grained, dimension-wise sparsity adaptation is crucial for pushing the limits of extreme LLM compression. Code is available at: https://github.com/flobk/TRIM
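The abstract only sketches the method at a high level. The snippet below is a minimal, hypothetical illustration of the core idea of row-wise sparsity allocation with iterative, metric-driven adjustment: each output row gets its own sparsity ratio, and ratios are rebalanced so that quality retention is more even across rows while the layer-wide average stays at the target. The saliency metric (a Wanda-style |W|·||X|| score), the quality metric (cosine similarity between dense and pruned row outputs), the update rule, and all function names are assumptions made for illustration; they are not the authors' implementation, which is available in the linked repository.

```python
import torch


def row_masks(W, metric, row_sparsity):
    """Prune each row of W independently at its own sparsity ratio."""
    out_dim, in_dim = W.shape
    mask = torch.ones_like(W, dtype=torch.bool)
    for i in range(out_dim):
        k = int(row_sparsity[i] * in_dim)
        if k > 0:
            idx = metric[i].argsort()[:k]  # lowest-saliency weights in row i
            mask[i, idx] = False
    return mask


def row_wise_prune_sketch(W, X, target_sparsity, steps=10, lr=0.05):
    """Illustrative sketch of dimension-wise (row-wise) sparsity allocation.

    W: (out_dim, in_dim) linear-layer weights; X: (n_samples, in_dim) calibration inputs.
    Saliency and quality metrics are assumed choices, not the paper's exact ones.
    """
    out_dim, in_dim = W.shape
    # Assumed Wanda-style saliency: |W_ij| * ||X_j||_2 per weight.
    metric = W.abs() * X.norm(dim=0, keepdim=True)
    dense_out = X @ W.t()  # reference (dense) outputs per row

    # Start from a uniform per-row sparsity, then iteratively rebalance it.
    row_sparsity = torch.full((out_dim,), float(target_sparsity))
    for _ in range(steps):
        mask = row_masks(W, metric, row_sparsity)
        pruned_out = X @ (W * mask).t()

        # Quality retention per output dimension (assumed metric: cosine similarity).
        quality = torch.nn.functional.cosine_similarity(
            dense_out.t(), pruned_out.t(), dim=1
        )

        # Lower the sparsity of poorly preserved rows, raise it for well-preserved
        # ones, then renormalize so the layer-wide mean stays at the target.
        row_sparsity = row_sparsity - lr * (quality.mean() - quality)
        row_sparsity = row_sparsity + (target_sparsity - row_sparsity.mean())
        row_sparsity = row_sparsity.clamp(0.0, 1.0)

    final_mask = row_masks(W, metric, row_sparsity)
    return W * final_mask, row_sparsity
```

In this sketch the rebalancing step reduces the variance of quality retention across rows, which mirrors the abstract's stated goal; how TRIM actually measures quality and adjusts ratios is specified in the paper and repository, not here.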