Toward deploying LLMs on resource-constrained platforms such as mobile robots and wearables, non-transformer LLMs have achieved major breakthroughs. Recently, a novel RNN-based LLM family, Receptance Weighted Key Value (RWKV) models, has shown promising results in text generation on resource-constrained devices thanks to its computational efficiency. However, these models remain too large to deploy on embedded devices due to their high parameter count. In this paper, we propose an efficient suite of compression techniques tailored to the RWKV architecture. These techniques include low-rank approximation, sparsity predictors, and a clustering head, designed to align with the model size. Our methods compress the RWKV models by 4.95--3.8x with only a 2.95pp loss in accuracy.
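As a rough illustration of the first of these techniques (not the paper's exact pipeline, which also involves per-layer rank selection, sparsity predictors, and the clustering head), low-rank approximation replaces a weight matrix W with two thin factors obtained from a truncated SVD, trading a small reconstruction error for a large cut in parameter count. The sketch below, with an illustrative function name of our own choosing, shows the idea in plain NumPy.

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Approximate W (d_out x d_in) as A @ B with A (d_out x r), B (r x d_in).

    Illustrative sketch only: rank choice and which layers to factor
    are assumptions here, not the paper's stated method.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]  # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Toy usage: factoring a 768x768 projection to rank 128 cuts its
# parameter count from 589,824 to 2 * 768 * 128 = 196,608.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))
A, B = low_rank_factorize(W, rank=128)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```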