Large Language Models (LLMs) have emerged as prominent AI models for solving many natural language tasks due to their high performance (e.g., accuracy) and their ability to generate high-quality responses to given inputs. However, their large computational cost, huge memory footprint, and high processing power/energy make their embedded deployment challenging. Among several tinyLLM approaches, recent works have proposed spike-driven language models (SLMs) to significantly reduce the processing power/energy of LLMs. However, their memory footprints still remain too large for low-cost, resource-constrained embedded devices. A manual quantization approach may effectively compress SLM memory footprints, but it requires huge design time and compute power to find the quantization setting for each network, making it not scalable across different networks, performance requirements, and memory budgets. To bridge this gap, we propose QSLM, a novel framework that performs automated quantization to compress pre-trained SLMs while meeting performance and memory constraints. To achieve this, QSLM first identifies the hierarchy of the given network architecture and the sensitivity of network layers to quantization, then employs a tiered quantization strategy (i.e., global-, block-, and module-level quantization) while leveraging a multi-objective performance-and-memory trade-off function to select the final quantization setting. Experimental results indicate that QSLM reduces memory footprint by up to 86.5% and power consumption by up to 20%, while maintaining performance close to the original non-quantized model across different tasks (i.e., up to 84.4% accuracy for sentiment classification on the SST-2 dataset and a perplexity score of 23.2 for text generation on the WikiText-2 dataset) and meeting the performance and memory constraints.
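To make the selection step concrete, below is a minimal illustrative sketch (not the authors' QSLM implementation) of picking a bit-width per module so that a weighted performance-and-memory trade-off score is maximized under memory and accuracy constraints. All module names, sensitivity values, candidate bit-widths, and the simple exhaustive search are assumptions for illustration only; the paper's actual tiered (global-, block-, and module-level) procedure and sensitivity analysis are more involved.

```python
# Illustrative sketch only: hypothetical modules and a simplified search,
# not the authors' QSLM implementation.
from dataclasses import dataclass
from itertools import product


@dataclass
class Module:
    name: str
    num_params: int      # number of weights in this module
    sensitivity: float   # assumed accuracy drop (in % points) per bit removed below 16


CANDIDATE_BITS = (16, 8, 4)  # assumed candidate precisions


def memory_mb(modules, bits):
    """Total weight memory in MB for a given bit-width assignment."""
    return sum(m.num_params * b for m, b in zip(modules, bits)) / (8 * 1024**2)


def est_accuracy(baseline_acc, modules, bits):
    """Rough proxy: each module loses sensitivity * (16 - bits) accuracy points."""
    drop = sum(m.sensitivity * (16 - b) for m, b in zip(modules, bits))
    return baseline_acc - drop


def select_quantization(modules, baseline_acc, mem_budget_mb, min_acc, alpha=0.5):
    """Exhaustively scores bit-width assignments (feasible for a handful of
    modules) with a trade-off of normalized accuracy and memory savings."""
    best, best_score = None, float("-inf")
    full_mem = memory_mb(modules, [16] * len(modules))
    for bits in product(CANDIDATE_BITS, repeat=len(modules)):
        mem = memory_mb(modules, bits)
        acc = est_accuracy(baseline_acc, modules, bits)
        if mem > mem_budget_mb or acc < min_acc:
            continue  # violates the memory or performance constraint
        score = alpha * (acc / baseline_acc) + (1 - alpha) * (1 - mem / full_mem)
        if score > best_score:
            best, best_score = bits, score
    return best


if __name__ == "__main__":
    # Hypothetical SLM modules with differing quantization sensitivity.
    net = [
        Module("embedding", 5_000_000, sensitivity=0.05),
        Module("spiking_block_0", 20_000_000, sensitivity=0.10),
        Module("spiking_block_1", 20_000_000, sensitivity=0.15),
        Module("classifier_head", 1_000_000, sensitivity=0.60),
    ]
    print(select_quantization(net, baseline_acc=85.0, mem_budget_mb=40.0, min_acc=80.0))
```

In this toy setting the search assigns lower bit-widths to the less sensitive, parameter-heavy modules and keeps the sensitive head at higher precision, mirroring the intent of a sensitivity-aware, constraint-driven quantization search.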