Recently, there has been growing demand to deploy Large Language Models (LLMs) on personal devices such as laptops and smartphones. These LLMs rely on different model variants to handle different tasks. However, personal devices have limited resources and require reduced storage overhead. To address this, two key methods are available: the first is model compression, which compresses LLMs to smaller sizes; the second is LoRA, which can transfer an LLM to other tasks with very few parameters, avoiding the storage of multiple model variants in multi-task scenarios by preserving only the LoRA modules. However, our experiments show that directly combining these two methods yields sub-optimal performance. Given that the open-source community has already contributed many LoRAs for LLMs, we propose adapting these existing LoRAs from the LLMs to their compressed versions and introduce a Compression-Aware LoRA (CA-LoRA) framework. We incorporate knowledge inheritance and recovery strategies to recover the knowledge lost during model compression. Experimental results demonstrate that CA-LoRA outperforms vanilla LoRA methods applied to a compressed LLM and achieves performance comparable to the non-compressed LLM with existing LoRA modules. The source code of CA-LoRA is available at https://github.com/thunlp/CA-LoRA.
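To make the parameter-efficiency argument concrete, the following is a minimal sketch of a LoRA-style low-rank update in NumPy. It is purely illustrative and not taken from the CA-LoRA codebase: the names (`lora_forward`, `A`, `B`) and the hyperparameters are our own assumptions. A frozen weight `W` is adapted as `W' = W + (alpha / r) * B @ A`, so only the small rank-`r` factors `A` and `B` need to be stored per task.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 4, 8  # illustrative sizes; rank r << d

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def lora_forward(x):
    # Base path plus a low-rank, task-specific correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B initialized to zero, the adapted model starts identical to the base.
assert np.allclose(y, W @ x)

# Storage saving: only A and B are kept per task, not a full model variant.
print(A.size + B.size, "LoRA params vs", W.size, "full params")
```

With these sizes, each task adds only 2 x 64 x 4 = 512 parameters on top of the 4096-parameter base weight, which is why keeping one compressed base plus per-task LoRA modules is attractive on storage-constrained devices.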