We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes dense autoregressive models with an improved architecture and training recipe, pretraining data mixture, and instruction tuning recipes. Our modified model architecture and training recipe achieve both better training stability and improved per-token efficiency. Our updated pretraining data mixture introduces a new, specialized data mix called Dolmino Mix 1124, which significantly improves model capabilities across many downstream task benchmarks when introduced via late-stage curriculum training (i.e., specialized data during the annealing phase of pretraining). Finally, we incorporate best practices from Tülu 3 to develop OLMo 2-Instruct, focusing on permissive data and extending our final stage with reinforcement learning with verifiable rewards (RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance to compute, often matching or outperforming open-weights-only models like Llama 3.1 and Qwen 2.5 while using fewer FLOPs and with fully transparent training data, code, and recipes. Our fully open OLMo 2-Instruct models are competitive with or surpass open-weights-only models of comparable size, including Qwen 2.5, Llama 3.1, and Gemma 2. We release all OLMo 2 artifacts openly -- models at 7B and 13B scales, both pretrained and post-trained, including their full training data, training code and recipes, training logs, and thousands of intermediate checkpoints. The final instruction model is available on the Ai2 Playground as a free research demo.