Large language models (LLMs) deliver impressive performance but incur prohibitive memory and compute costs at deployment. Model pruning is an effective way to reduce these overheads, yet existing approaches face challenges: unstructured sparsity, where nonzeros can appear anywhere, preserves accuracy but yields irregular access patterns that prevent GPU acceleration, while semi-structured 2:4 sparsity is hardware-friendly but enforces a rigid 50% pattern that degrades model quality. To bridge this gap, we introduce PATCH, a hybrid sparsity framework that enables a continuous sparsity ratio between 0% and 50%. PATCH partitions weight matrices into tiles, assigning each tile to be either dense or 2:4 sparse via a learnable mask selection mechanism. This design provides fine-grained control over accuracy-acceleration tradeoffs and supports non-uniform sparsity across layers, leading to superior overall quality. Across models from 0.5B to 8B parameters, PATCH consistently narrows the gap to dense accuracy while delivering practical speedups. For instance, on LLaMA-2 7B with an A6000 GPU, PATCH achieves 1.18x-1.38x end-to-end speedup over dense baselines while improving accuracy by 0.37%-2.96% compared to the state-of-the-art 2:4 pruning method, MaskLLM.
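The tile-wise mechanism described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names (`prune_2_4`, `hybrid_sparsify`), the square tile shape, and magnitude-based selection of which two weights to drop per group are all assumptions for illustration; PATCH's actual mask selection is learned, not magnitude-based.

```python
import numpy as np

def prune_2_4(tile):
    """Illustrative magnitude-based 2:4 pruning: in every group of 4
    consecutive weights along the last axis, zero the 2 smallest."""
    flat = tile.reshape(-1, 4)
    # indices of the two smallest-magnitude weights in each group of 4
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (flat * mask).reshape(tile.shape)

def hybrid_sparsify(W, tile_size, sparse_tiles):
    """Partition W into (tile_size x tile_size) tiles; apply 2:4
    pruning to the tiles listed in sparse_tiles, leave the rest dense.
    The overall sparsity ratio is 50% * (fraction of sparse tiles)."""
    W = W.copy()
    for (i, j) in sparse_tiles:
        r, c = i * tile_size, j * tile_size
        block = W[r:r + tile_size, c:c + tile_size]
        W[r:r + tile_size, c:c + tile_size] = prune_2_4(block)
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
# Mark one of the four 4x4 tiles as 2:4 sparse -> 12.5% overall sparsity
W_sparse = hybrid_sparsify(W, tile_size=4, sparse_tiles=[(0, 0)])
```

Varying the fraction of tiles assigned to the sparse set is what yields a continuous effective sparsity ratio between 0% (all tiles dense) and 50% (all tiles 2:4), and making the per-tile assignment non-uniform across layers lets harder layers stay denser.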