Large language models (LLMs) based on the Transformer architecture are widely employed across domains and tasks. However, their increasing size imposes significant hardware demands, limiting practical deployment. To mitigate this, model pruning techniques have been developed to create more efficient models while maintaining high performance. Even so, post-training after pruning is crucial for recovering performance and can itself be resource-intensive. This paper investigates the post-training requirements of pruned LLMs and introduces a scaling law to determine the optimal amount of post-training data. Post-training experiments on the Llama-3 and Qwen-2.5 series models, pruned with depth pruning, width pruning, and 2:4 semi-structured pruning, show that higher pruning ratios require more post-training data to recover performance, whereas larger LLMs require less. The proposed scaling law predicts a pruned model's loss from its parameter counts before and after pruning and the number of post-training tokens. Furthermore, we find that a scaling law established on smaller LLMs extrapolates reliably to larger ones. This work provides valuable insights into the post-training of pruned LLMs and offers a practical scaling law for optimizing post-training data usage.
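To make the abstract's description concrete, the sketch below fits a scaling law of the kind described: post-training loss as a function of the parameter count before pruning (N0), the parameter count after pruning (N), and the number of post-training tokens (D). The Chinchilla-style functional form, the coefficient names (E, A, B, alpha, beta, gamma), and the synthetic observations are assumptions made purely for illustration; they are not the paper's fitted law or its experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pruned_loss(X, E, A, B, alpha, beta, gamma):
    """Hypothetical loss form for a pruned, post-trained model.

    N0: parameter count before pruning, N: parameter count after pruning,
    D: post-training token count. The power-law-plus-pruning-penalty shape
    is an assumption for illustration, not the paper's fitted law.
    """
    N0, N, D = X
    # Fewer remaining parameters and fewer post-training tokens raise the loss;
    # a larger pruning ratio (N0 / N) adds a further penalty.
    return E + A / N**alpha + B / D**beta + gamma * np.log(N0 / N)

rng = np.random.default_rng(0)

# Synthetic "experiments": original sizes, pruned sizes, and token budgets.
N0 = rng.choice([3e9, 8e9, 14e9], size=40)       # parameters before pruning
N = N0 * rng.uniform(0.3, 0.9, size=40)          # parameters after pruning
D = 10 ** rng.uniform(8.5, 10.5, size=40)        # post-training tokens

# Placeholder "true" coefficients used only to generate noisy observations.
true = (1.7, 4.0e2, 1.0e2, 0.30, 0.28, 0.12)
loss = pruned_loss((N0, N, D), *true) + rng.normal(0, 0.01, size=40)

# Fit the coefficients back from the synthetic observations.
fit, _ = curve_fit(pruned_loss, (N0, N, D), loss,
                   p0=[2.0, 1e2, 1e2, 0.3, 0.3, 0.1], maxfev=50000)
print(dict(zip(["E", "A", "B", "alpha", "beta", "gamma"], np.round(fit, 3))))
```

Under this kind of form, the fitted coefficients can then be used to estimate how many post-training tokens a given pruned model would need to reach a target loss, which is the practical use the abstract attributes to the scaling law.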