While scaling laws have been continuously validated in large language models (LLMs) as model parameters grow, the inherent tension between the inference demands of LLMs and the limited resources of edge devices poses a critical challenge to the development of edge intelligence. Recently, numerous small language models have emerged, aiming to distill the capabilities of LLMs into smaller footprints. However, these models often retain the fundamental architectural principles of their larger counterparts, still placing considerable strain on the storage and bandwidth capacities of edge devices. In this paper, we introduce PLM, a Peripheral Language Model, developed through a co-design process that jointly optimizes the model architecture and edge-system constraints. PLM adopts a Multi-head Latent Attention mechanism and the squared ReLU activation function to encourage sparsity, thereby reducing the peak memory footprint during inference. For training, we collect and reorganize open-source datasets, implement a multi-phase training strategy, and empirically investigate the Warmup-Stable-Decay-Constant (WSDC) learning rate scheduler. We further incorporate Reinforcement Learning from Human Feedback (RLHF) via the ARIES preference learning approach. Following a two-phase SFT process, this method yields performance gains of 2% on general tasks, 9% on GSM8K, and 11% on coding tasks. Beyond its novel architecture, evaluation results demonstrate that PLM outperforms existing small language models trained on publicly available data while activating the fewest parameters. Deployment across various edge devices, including consumer-grade GPUs, mobile phones, and Raspberry Pis, further validates PLM's suitability for peripheral applications. The PLM series models are publicly available at https://github.com/plm-team/PLM.
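The sparsity mechanism mentioned above can be illustrated with a minimal sketch: squared ReLU zeroes out all negative pre-activations (exactly like ReLU) and squares the rest, so a large fraction of activation entries are exactly zero and need not be stored or moved at inference time. This is an illustrative NumPy example, not PLM's actual implementation.

```python
import numpy as np

def squared_relu(x):
    """Squared ReLU: max(0, x)^2.
    Negative pre-activations become exactly zero, yielding sparse
    activations; positive values are squared. Illustrative sketch only.
    """
    return np.square(np.maximum(x, 0.0))

# Roughly half of random pre-activations are negative, so roughly half
# of the outputs are exact zeros that a sparse kernel could skip.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
y = squared_relu(x)
print(y)                      # negatives map to zero, positives are squared
print(np.mean(y == 0.0))      # fraction of exactly-zero activations
```

Because the zeros are exact (not merely small), a runtime can exploit them to reduce the peak memory footprint, which is the motivation stated in the abstract.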