We present Fox-1, a series of small language models (SLMs) consisting of Fox-1-1.6B and Fox-1-1.6B-Instruct-v0.1. These models are pre-trained on 3 trillion tokens of web-scraped document data and fine-tuned on 5 billion tokens of instruction-following and multi-turn conversation data. To improve pre-training efficiency, Fox-1-1.6B introduces a novel 3-stage data curriculum over all of the training data, with sequence lengths ranging from 2K to 8K. In terms of architecture design, Fox-1 features a deeper layer structure, an expanded vocabulary, and Grouped Query Attention (GQA), offering a performant and efficient architecture compared to other SLMs. Fox-1 achieves better or on-par performance on various benchmarks compared to StableLM-2-1.6B, Gemma-2B, Qwen1.5-1.8B, and OpenELM-1.1B, with competitive inference speed and throughput. The model weights have been released under the Apache 2.0 license, with the aim of promoting the democratization of LLMs and making them fully accessible to the whole open-source community.