In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised fine-tuning with over 1 million samples, as well as multi-stage reinforcement learning. These post-training techniques enhance alignment with human preferences and notably improve long-text generation, structured data analysis, and instruction following. To handle diverse use cases effectively, we present the Qwen2.5 LLM series in a rich range of sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions also available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants, Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, and human preference alignment, among others. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and performs competitively with the state-of-the-art open-weight model, Llama-3.1-405B-Instruct, which is around five times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, Qwen2.5 models have served as the foundation for training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.
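As a minimal usage sketch (not part of the report itself): the open-weight checkpoints are published on Hugging Face under the Qwen organization and follow the standard transformers chat-template workflow. The model ID and prompt below are illustrative choices, assuming `Qwen/Qwen2.5-7B-Instruct` as a representative instruction-tuned model.

```python
# Minimal sketch: loading an open-weight Qwen2.5 instruct model with
# Hugging Face transformers. Model ID and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # other sizes in the series load the same way

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # spread layers across available devices
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the key idea of mixture-of-experts models."},
]
# Build the prompt string with the model's chat template.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```

The proprietary Qwen2.5-Turbo and Qwen2.5-Plus models are served through Alibaba Cloud Model Studio rather than as downloadable weights; consult the Model Studio documentation for the hosted API details.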