In recent years, large language models (LLMs) have demonstrated significant potential in complex reasoning tasks such as mathematical problem solving. However, existing research relies predominantly on reinforcement learning (RL) frameworks and overlooks supervised fine-tuning (SFT). This paper proposes a new two-stage training framework that enhances a model's self-correction capability through self-generated long chain-of-thought (CoT) data. In the first stage, a multi-turn dialogue strategy guides the model to generate CoT data that incorporates verification, backtracking, subgoal decomposition, and backward reasoning, and predefined rules select high-quality samples for supervised fine-tuning. The second stage employs a difficulty-aware rejection sampling mechanism to dynamically optimize the training data distribution, strengthening the model's ability to handle complex problems. The approach extends the generated reasoning chains to more than four times their original length while remaining highly scalable, showing that SFT can effectively activate a model's intrinsic reasoning capabilities and offering a resource-efficient path for optimizing complex tasks. Experimental results demonstrate performance improvements on mathematical benchmarks including GSM8K and MATH500, and the fine-tuned model achieves a substantial gain on competition-level problems such as AIME24. The code will be open-sourced.
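To make the first stage more concrete, the following is a minimal sketch of the kind of rule-based filter the abstract alludes to: it keeps a self-generated sample only if the final answer is correct, the chain is long enough, and enough of the four target behaviours appear. The cue phrases, thresholds, and function names are illustrative assumptions, not the paper's actual rules.

```python
import re

# Hypothetical cue phrases for the four target behaviours; the paper's actual
# filtering rules are not specified in the abstract, so these are illustrative.
BEHAVIOUR_CUES = {
    "verification": [r"let me verify", r"check(ing)? the answer"],
    "backtracking": [r"wait,", r"that (was|is) wrong", r"let me try (a different|another)"],
    "subgoal_decomposition": [r"first,.*then", r"break (this|the problem) into"],
    "backward_reasoning": [r"working backwards?", r"start(ing)? from the answer"],
}

def keep_sample(cot: str, predicted_answer: str, gold_answer: str,
                min_behaviours: int = 2, min_length: int = 512) -> bool:
    """Rule-based filter for a self-generated long-CoT sample (illustrative)."""
    # 1) the final answer must match the reference answer
    if predicted_answer.strip() != gold_answer.strip():
        return False
    # 2) the chain must be long enough to count as "long CoT"
    if len(cot) < min_length:
        return False
    # 3) at least `min_behaviours` of the four behaviours must be present
    hits = sum(
        any(re.search(p, cot, flags=re.IGNORECASE) for p in patterns)
        for patterns in BEHAVIOUR_CUES.values()
    )
    return hits >= min_behaviours
```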
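The second stage's difficulty-aware rejection sampling can be sketched as below, under the assumption that the pass rate over k sampled solutions serves as the difficulty proxy and that harder problems receive a larger keep quota; `generate`, `is_correct`, and the quota rule are assumed interfaces for illustration, not the paper's actual procedure.

```python
import random

def difficulty_aware_rejection_sampling(problems, generate, is_correct,
                                        k: int = 16, max_keep: int = 4):
    """Sketch of difficulty-aware rejection sampling (illustrative).

    `generate(problem)` samples one candidate solution from the current model and
    `is_correct(problem, solution)` checks its final answer; both are assumed
    callables, not APIs from the paper.
    """
    dataset = []
    for problem in problems:
        candidates = [generate(problem) for _ in range(k)]
        correct = [c for c in candidates if is_correct(problem, c)]
        if not correct:
            continue  # no correct sample yet, so no clean supervision signal
        # pass rate as a difficulty proxy: lower pass rate = harder problem
        pass_rate = len(correct) / k
        # keep more samples for harder problems to tilt the data distribution
        quota = max(1, round(max_keep * (1.0 - pass_rate)))
        dataset.extend(random.sample(correct, min(quota, len(correct))))
    return dataset
```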