Recently, slow-thinking reasoning systems, such as o1, have demonstrated remarkable capabilities in solving complex reasoning tasks. These systems typically engage in an extended thinking process before responding to a query, allowing them to generate more thorough, accurate, and well-reasoned solutions. Such systems are primarily developed and maintained by industry, and their core techniques are not publicly disclosed. In response, a growing number of studies from the research community aim to explore the technical foundations underlying these powerful reasoning systems. Building on these prior efforts, this paper presents a reproduction report on implementing o1-like reasoning systems. We introduce an "imitate, explore, and self-improve" framework as our primary technical approach to train the reasoning model. In the initial phase, we fine-tune the reasoning model on distilled long-form thought data, enabling it to invoke a slow-thinking mode. The model is then encouraged to explore challenging problems by generating multiple rollouts, yielding an increasing number of high-quality trajectories that lead to correct answers. Furthermore, the model undergoes self-improvement by iteratively refining its training dataset. To verify the effectiveness of this approach, we conduct extensive experiments on three challenging benchmarks. The experimental results demonstrate that our approach achieves competitive performance compared to industry-level reasoning systems on these benchmarks.
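The explore-and-self-improve loop described above can be sketched in pseudocode-style Python. This is a minimal illustration under our own assumptions, not the paper's implementation: `generate_rollout` and `fine_tune` are hypothetical stand-ins for a real LLM sampler and a supervised fine-tuning run, and the answer-matching filter is simplified to exact string comparison.

```python
import random

def generate_rollout(model, problem, rng):
    # Hypothetical stand-in: a real system would decode a long
    # chain-of-thought trace and a final answer from the model.
    answer = rng.choice([problem["answer"], "wrong"])
    return {"question": problem["question"], "trace": "...", "answer": answer}

def fine_tune(model, dataset):
    # Placeholder: a real implementation would run SFT on `dataset`.
    return model

def explore_and_improve(model, problems, n_rollouts=8, n_iters=3, seed=0):
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_iters):
        for problem in problems:
            # Explore: sample several rollouts per problem and keep only
            # trajectories whose final answer matches the reference.
            rollouts = [generate_rollout(model, problem, rng)
                        for _ in range(n_rollouts)]
            dataset += [r for r in rollouts if r["answer"] == problem["answer"]]
        # Self-improve: refine the training set with the newly collected
        # correct trajectories and fine-tune the model on it.
        model = fine_tune(model, dataset)
    return model, dataset

problems = [{"question": "1+1", "answer": "2"}]
model, data = explore_and_improve("base-model", problems)
```

The key design point is the filter: only trajectories that reach the correct answer enter the training set, so each fine-tuning iteration sees a progressively larger pool of verified slow-thinking traces.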