Since the release of GPT-2 (1.5B parameters) in 2019, large language models (LLMs) have transitioned from specialized models to versatile foundation models. Although LLMs exhibit impressive zero-shot ability, they typically require fine-tuning on local datasets and significant resources for deployment. Traditional fine-tuning with first-order optimizers demands GPU memory that exceeds the capability of mainstream hardware, which motivates the investigation of memory-efficient methods. Model compression techniques can reduce energy consumption, operational costs, and environmental impact, thereby supporting the sustainable advancement of artificial intelligence. Additionally, large-scale foundation models have expanded to generate images, audio, video, and multi-modal content, further emphasizing the need for efficient deployment. Motivated by these trends, we present a comprehensive overview of prevalent memory-efficient fine-tuning methods at the network edge. We also review the state-of-the-art literature on model compression to provide a vision for deploying LLMs at the network edge.