Large Language Models (LLMs) have revolutionized natural language processing by achieving state-of-the-art results across a wide variety of tasks. However, the computational demands of LLM inference, including high memory consumption and slow processing speeds, pose significant challenges for real-world deployment, particularly on resource-constrained devices. Efficient inference is crucial for extending LLM deployment to a broader range of platforms, including mobile and edge devices. This survey reviews contemporary model compression techniques that address these challenges by reducing the size and computational requirements of LLMs while preserving their performance. We focus on model-level compression methods, including quantization, knowledge distillation, and pruning, as well as system-level optimizations such as efficient KV cache design. Each of these methodologies offers a distinct approach to optimizing LLMs, from reducing numerical precision, to transferring knowledge between models, to structurally simplifying neural networks. In addition, we discuss emerging trends in system-level design that further enhance the efficiency of LLM inference. This survey aims to provide a comprehensive overview of current advances in model compression and their potential to make LLMs more accessible and practical for diverse applications.