This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment. Contrary to the prevailing belief that data and parameter quantity are the decisive factors in model quality, our investigation underscores the significance of model architecture for sub-billion-scale LLMs. Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network, denoted MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over the preceding 125M/350M state-of-the-art models. Additionally, we propose an immediate block-wise weight-sharing approach with no increase in model size and only marginal latency overhead. The resulting models, denoted MobileLLM-LS, demonstrate a further accuracy gain of 0.7%/0.8% over MobileLLM 125M/350M. Moreover, the MobileLLM model family shows significant improvements over previous sub-billion models on chat benchmarks, and achieves correctness close to LLaMA-v2 7B on API calling tasks, highlighting the capability of small models for common on-device use cases.
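The immediate block-wise weight sharing mentioned above can be illustrated with a minimal sketch: each unique transformer block is executed twice in succession, so depth doubles while the parameter count stays fixed, and reusing the block immediately keeps its weights resident in fast memory. The class and function names below are illustrative assumptions, not the authors' code; the `Block` here is only a placeholder for a real transformer block.

```python
class Block:
    """Placeholder for a transformer block; in practice this would hold
    attention and feed-forward weights (assumption: simplified for sketch)."""
    def __init__(self, idx):
        self.idx = idx

    def __call__(self, x):
        return x + 1  # stand-in for the block's actual computation

def build_shared_stack(n_unique):
    """Repeat each unique block immediately after itself: 2*n_unique
    executed layers, but only n_unique blocks' worth of parameters."""
    blocks = []
    for i in range(n_unique):
        b = Block(i)
        blocks.append(b)
        blocks.append(b)  # the very same object, so weights are shared
    return blocks

stack = build_shared_stack(4)  # 8 executed layers, 4 unique blocks
assert len(stack) == 8
assert stack[0] is stack[1]    # adjacent layers alias one block
assert len({id(b) for b in stack}) == 4
```

The key design point is that the repeated block runs *immediately* after the original, so its weights need not be re-fetched from slower memory, which is why the latency overhead stays marginal on device.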