Although large language models (LLMs) have demonstrated strong capabilities, their high demand for computation and storage hinders practical deployment. To this end, many model compression techniques have been proposed to improve the efficiency of LLMs. However, current research only validates its methods on a limited set of models, datasets, and metrics, and still lacks comprehensive evaluation under more general scenarios. It therefore remains unclear which model compression approach should be used in a given case. To bridge this gap, we present the Large Language Model Compression Benchmark (LLMCBench), a rigorously designed benchmark with an in-depth analysis of LLM compression algorithms. We first analyze real-world model production requirements and carefully design evaluation tracks and metrics. Then, we conduct extensive experiments and comparisons using multiple mainstream LLM compression approaches. Finally, we perform an in-depth analysis based on the evaluation results and provide useful insights for LLM compression design. We hope LLMCBench can offer insightful suggestions for LLM compression algorithm design and serve as a foundation for future research. Our code is available at https://github.com/AboveParadise/LLMCBench.