Recently, state-of-the-art approaches for pruning large pre-trained models (LPMs) have demonstrated that the training-free removal of non-critical residual blocks in Transformers is a viable way to reduce model size, achieving results that outperform previous training-free pruning approaches. Motivated by these findings, we extend BlockPruner (Zhong et al., 2024) and propose MultiPruner, a pruning approach that surpasses recent training-free pruning methods by adopting a multidimensional, iterative, fine-grained pruning strategy. In MultiPruner, multidimensional pruning restores structural balance in block-pruned models by sequentially compressing along three dimensions: i) residual blocks, ii) channels of multilayer perceptrons (MLP), and iii) attention heads. This solution enhances zero-shot accuracy on downstream tasks compared to other techniques while improving model compression ratios, producing compressed models with lower compute and memory requirements. Extensive experiments demonstrate the advantages of the proposed method across various large pre-trained models. The code and pruning configurations are available at https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning.
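To make the three-stage strategy concrete, the following is a minimal, hypothetical sketch of MultiPruner-style iterative pruning: compress sequentially along residual blocks, then MLP channels, then attention heads, greedily removing the least-important candidate until each stage's compression target is met. The toy parameter model, stage targets, and the importance proxy are all illustrative assumptions, not the paper's actual code or metric.

```python
# Hypothetical sketch of MultiPruner-style training-free pruning.
# Assumptions: a toy per-block parameter model and an index-based
# importance proxy standing in for the paper's real importance metric.
from dataclasses import dataclass


@dataclass
class Block:
    mlp_channels: int
    attn_heads: int
    alive: bool = True


def param_count(blocks, channel_cost=10, head_cost=20, base=5):
    # Toy cost model: each live block costs base + channels + heads.
    return sum(base + b.mlp_channels * channel_cost + b.attn_heads * head_cost
               for b in blocks if b.alive)


def importance(idx):
    # Stand-in importance proxy: later blocks are treated as more important,
    # so earlier blocks are pruned first.
    return idx


def prune_stage(blocks, original, target, candidates, apply):
    # Greedily apply the least-important candidate prune until the global
    # compression ratio drops to the stage target (or candidates run out).
    while param_count(blocks) / original > target:
        cands = candidates(blocks)
        if not cands:
            return
        apply(blocks, min(cands, key=importance))


def multipruner(blocks, stage_targets=(0.9, 0.8, 0.75), channel_step=8):
    original = param_count(blocks)
    # Stage 1 (coarse): remove whole residual blocks.
    prune_stage(blocks, original, stage_targets[0],
                lambda bs: [i for i, b in enumerate(bs) if b.alive],
                lambda bs, i: setattr(bs[i], "alive", False))
    # Stage 2 (finer): trim MLP channels to restore structural balance.
    prune_stage(blocks, original, stage_targets[1],
                lambda bs: [i for i, b in enumerate(bs)
                            if b.alive and b.mlp_channels > channel_step],
                lambda bs, i: setattr(bs[i], "mlp_channels",
                                      bs[i].mlp_channels - channel_step))
    # Stage 3 (finest): drop attention heads one at a time.
    prune_stage(blocks, original, stage_targets[2],
                lambda bs: [i for i, b in enumerate(bs)
                            if b.alive and b.attn_heads > 1],
                lambda bs, i: setattr(bs[i], "attn_heads",
                                      bs[i].attn_heads - 1))
    return param_count(blocks) / original
```

Because each stage operates at a finer granularity than the last, later stages can recover the structural balance that whole-block removal alone would leave skewed, without any retraining step.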