Specializing large language models (LLMs) for local deployment in domain-specific use cases is necessary to achieve strong performance while meeting latency and privacy constraints. However, conventional task-specific adaptation approaches do not yield simultaneous memory savings and inference speedups at deployment time. Practical compression techniques such as quantization and pruning require dedicated hardware or kernel support to achieve measurable inference speedups. We develop TrimLLM based on the layer-wise specialization phenomenon we empirically observed and verified on contemporary LLMs. TrimLLM reduces the depth of LLMs via progressive layer dropping. We show it retains LLMs' capacity in specific domains and achieves inference speedups irrespective of hardware and deep learning frameworks. We evaluated TrimLLM on LLMs of various sizes for inference; models adapted on medical, legal, and financial datasets all demonstrate $2.1-5.7\times$ inference speedup on consumer GPUs and up to $3.1\times$ speedup on A100 compared to state-of-the-art model compression algorithms, with no loss in accuracy at a 50$\sim$60\% model compression ratio.
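To make the core idea of progressive layer dropping concrete, here is a minimal, self-contained sketch. It is illustrative only and not the authors' implementation: the model is a plain list of layer functions, and `score_fn` stands in for a hypothetical domain-validation metric used to pick which layer to drop at each step.

```python
# Minimal sketch of progressive layer dropping (illustrative assumption,
# not TrimLLM's actual code). A "model" is a list of layer functions;
# at each step we drop the layer whose removal hurts a held-out score
# the least, until the target depth (compression ratio) is reached.

def forward(layers, x):
    for f in layers:
        x = f(x)
    return x

def progressive_drop(layers, score_fn, keep_ratio=0.5):
    layers = list(layers)
    target = max(1, int(len(layers) * keep_ratio))
    while len(layers) > target:
        # Hypothetical importance proxy: evaluate the model with each
        # layer ablated and remove the one the score needs least.
        best_i, best_s = None, float("-inf")
        for i in range(len(layers)):
            trial = layers[:i] + layers[i + 1:]
            s = score_fn(trial)
            if s > best_s:
                best_i, best_s = i, s
        del layers[best_i]
    return layers

# Toy demo: four layers that each add 1; the "score" prefers models
# whose output on input 0 is close to 2 (a stand-in validation metric).
layers = [lambda x, c=c: x + c for c in (1, 1, 1, 1)]
score = lambda ls: -abs(forward(ls, 0) - 2)
kept = progressive_drop(layers, score, keep_ratio=0.5)
```

At a 50% keep ratio the loop removes layers one at a time, re-scoring after each removal; in a real setting the scoring would be a validation pass on the target domain, which is what makes the dropping "specialization-aware".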