Efficient adaptation of large language models (LLMs) on edge devices is essential for applications requiring continuous and privacy-preserving adaptation and inference. However, existing tuning techniques fall short because of their high computation and memory overheads. To this end, we introduce a computation- and memory-efficient LLM tuning framework, called Edge-LLM, to facilitate affordable and effective LLM adaptation on edge devices. Specifically, Edge-LLM features three core components: (1) a layer-wise unified compression (LUC) technique that reduces the computation overhead by generating layer-wise pruning sparsity and quantization bit-width policies, (2) an adaptive layer tuning and voting scheme that reduces the memory overhead by shrinking the backpropagation depth, and (3) a complementary hardware scheduling strategy that handles the irregular computation patterns introduced by LUC and adaptive layer tuning, enabling efficient computation and data movement. Extensive experiments demonstrate that Edge-LLM achieves a 2.92x speedup and a 4x memory overhead reduction compared to vanilla tuning methods, while maintaining comparable task accuracy. Our code is available at https://github.com/GATECH-EIC/Edge-LLM.
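To make the LUC idea concrete, below is a minimal PyTorch sketch of per-layer compression policies: each linear layer gets its own pruning sparsity and quantization bit-width rather than one uniform setting. The abstract does not specify the actual policy-generation algorithm, so the sensitivity proxy (mean weight magnitude) and the mapping from sensitivity rank to (sparsity, bit-width) used here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of layer-wise unified compression (LUC):
# assign each layer its own pruning sparsity and quantization bit-width.
# The sensitivity proxy and policy mapping are illustrative assumptions.
import torch
import torch.nn as nn

def layer_sensitivity(weight: torch.Tensor) -> float:
    # Assumed proxy: layers with larger mean weight magnitude are treated
    # as more sensitive and are compressed less aggressively.
    return weight.abs().mean().item()

def luc_policy(model: nn.Module) -> dict:
    # Rank linear layers by sensitivity, then map rank to (sparsity, bits).
    linears = [(n, m) for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    ranked = sorted(linears, key=lambda nm: layer_sensitivity(nm[1].weight))
    policy = {}
    for rank, (name, _) in enumerate(ranked):
        frac = rank / max(len(ranked) - 1, 1)  # 0.0 = least sensitive layer
        sparsity = 0.7 - 0.4 * frac            # prune less-sensitive layers harder
        bits = 4 if frac < 0.5 else 8          # give sensitive layers more bits
        policy[name] = (sparsity, bits)
    return policy

def apply_luc(model: nn.Module, policy: dict) -> None:
    for name, module in model.named_modules():
        if name not in policy:
            continue
        sparsity, bits = policy[name]
        w = module.weight.data
        # Magnitude pruning: zero out the smallest-magnitude weights.
        k = int(w.numel() * sparsity)
        if k > 0:
            threshold = w.abs().flatten().kthvalue(k).values
            w[w.abs() <= threshold] = 0.0
        # Uniform symmetric fake-quantization to the chosen bit-width.
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax if w.abs().max() > 0 else 1.0
        module.weight.data = (w / scale).round().clamp(-qmax, qmax) * scale

# Usage: compress a toy stack of linear layers with per-layer policies.
model = nn.Sequential(nn.Linear(64, 64), nn.Linear(64, 64), nn.Linear(64, 64))
apply_luc(model, luc_policy(model))
```

The same per-layer policy dictionary is what makes the computation pattern irregular (mixed sparsities and bit-widths across layers), which is why a complementary hardware scheduling strategy is needed, as the abstract notes.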