Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce Tulu 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. Tulu 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With Tulu 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the Tulu 3 model weights and demo, we release the complete recipe -- including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the Tulu 3 approach to more domains.
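To make the RLVR idea concrete: instead of scoring completions with a learned reward model, the reward is computed programmatically against ground truth. The sketch below is a minimal illustration under that assumption for a numeric-answer task; the function names and the answer-extraction heuristic are hypothetical, not the paper's actual implementation.

```python
# Illustrative sketch of a "verifiable reward" in the spirit of RLVR:
# the reward is binary -- 1.0 when the model's final answer can be
# checked programmatically against ground truth, 0.0 otherwise.
# The extraction heuristic here is a simplifying assumption.
import re


def extract_final_answer(completion: str):
    """Pull the last number out of a completion (simple heuristic)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else None


def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 iff the extracted answer matches ground truth."""
    answer = extract_final_answer(completion)
    return 1.0 if answer is not None and answer == ground_truth else 0.0
```

A policy-gradient method (e.g. PPO) would then optimize the model against this reward on prompts whose answers can be verified, such as math problems with known solutions.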