The rise of large language models (LLMs) has significantly advanced many natural language processing (NLP) tasks. However, the resource demands of these models pose substantial challenges. Structured pruning is an effective approach to reducing model size, but it often causes significant accuracy degradation, necessitating parameter updates for adaptation. Unfortunately, such fine-tuning requires substantial memory, which limits its applicability. To address these challenges, we introduce quantization into the structured pruning framework to reduce memory consumption during both fine-tuning and inference. However, the combined errors from pruning and quantization increase the difficulty of fine-tuning, requiring a more refined quantization scheme. To this end, we propose QPruner, a novel framework that applies structured pruning to reduce model size, followed by a layer-wise mixed-precision quantization scheme. Each layer is assigned a quantization precision according to its importance to the target task, and Bayesian optimization is employed to refine the precision-allocation strategy, balancing model accuracy against memory efficiency. Extensive experiments on benchmark datasets demonstrate that QPruner significantly outperforms existing methods in memory savings while maintaining or improving model performance.
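The core allocation idea can be illustrated with a minimal sketch: assign each layer a bit-width under an average-bit memory budget so that more important layers receive higher precision. The importance scores, the proxy objective, and the exhaustive grid search (standing in for the paper's Bayesian optimization, which scales to many layers) are all illustrative assumptions, not the authors' implementation.

```python
from itertools import product

def allocate_precisions(importance, precisions=(4, 8), budget_bits=6.0):
    """Score every per-layer bit-width assignment and keep the one that
    maximizes importance-weighted precision under an average-bit budget.
    Grid search stands in for Bayesian optimization here (illustrative
    only; feasible for a handful of layers)."""
    n = len(importance)
    best, best_score = None, float("-inf")
    for cand in product(precisions, repeat=n):
        if sum(cand) / n > budget_bits:  # memory constraint: average bits per layer
            continue
        # proxy for task accuracy: important layers should keep more bits
        score = sum(w * b for w, b in zip(importance, cand))
        if score > best_score:
            best, best_score = list(cand), score
    return best

# hypothetical per-layer importance scores (not from the paper)
imp = [0.9, 0.2, 0.7, 0.1]
bits = allocate_precisions(imp)  # high-importance layers get 8 bits, the rest 4
```

In a real system, the objective would be measured task accuracy after quantized fine-tuning rather than this linear proxy, which is exactly where a sample-efficient optimizer such as Bayesian optimization pays off.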