Large reasoning models (LRMs), such as OpenAI o1 and DeepSeek-R1, have substantially improved their reasoning capabilities by generating longer chains of thought, achieving strong performance across a wide range of tasks. However, this gain comes at the cost of considerable redundant reasoning during generation, which incurs high computational overhead and aggravates the problem of overthinking. Although many existing approaches attempt to mitigate overthinking, they typically rely on external interventions. In this paper, we propose a novel framework, Self-Braking Tuning (SBT), which tackles overthinking by allowing the model to regulate its own reasoning process, thereby eliminating the reliance on external control mechanisms. We construct a set of overthinking identification metrics based on ground-truth answers and design a systematic method for detecting redundant reasoning. This method accurately identifies unnecessary steps within a reasoning trajectory and generates training signals for learning self-regulation behaviors. Building on this foundation, we develop a complete strategy for constructing data with adaptive reasoning lengths and introduce an innovative braking prompt mechanism that enables the model to naturally learn when to terminate its reasoning. Experiments on mathematical benchmarks (AIME, AMC, MATH500, GSM8K) demonstrate that our method reduces token consumption by up to 60% while maintaining accuracy comparable to unconstrained models.
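To make the overthinking-detection idea concrete, here is a minimal sketch that flags the point in a reasoning trace where the ground-truth answer first appears and treats the steps after it as redundant. Everything in it is an illustrative assumption rather than the paper's actual metric: the function names `redundancy_ratio` and `truncate_at_answer`, the double-newline step splitting, the naive substring match standing in for answer verification, and the toy trace are all hypothetical.

```python
# Hypothetical sketch of redundancy detection in a chain-of-thought trace.
# Assumption: steps are separated by blank lines, and a step "contains the
# answer" if the gold answer string appears in it (a stand-in for real
# answer verification; not the paper's actual identification metrics).

def redundancy_ratio(reasoning: str, gold_answer: str) -> float:
    """Fraction of reasoning steps produced after the answer first appears."""
    steps = [s for s in reasoning.split("\n\n") if s.strip()]
    for i, step in enumerate(steps):
        if gold_answer in step:                 # naive string match
            return 1.0 - (i + 1) / len(steps)   # later steps count as redundant
    return 0.0  # answer never reached: nothing to truncate

def truncate_at_answer(reasoning: str, gold_answer: str) -> str:
    """Keep steps up to and including the first answer-bearing step."""
    steps = [s for s in reasoning.split("\n\n") if s.strip()]
    for i, step in enumerate(steps):
        if gold_answer in step:
            return "\n\n".join(steps[: i + 1])
    return reasoning

if __name__ == "__main__":
    trace = ("Compute 12 * 7.\n\n12 * 7 = 84.\n\n"
             "Wait, let me double-check: 12 * 7 = 84. Yes.\n\n"
             "So the answer is 84.")
    print(redundancy_ratio(trace, "84"))    # 0.5: half the steps are re-checks
    print(truncate_at_answer(trace, "84"))  # trace cut after the answer step
```

Under this sketch, a truncated trace could serve as a training target with adaptive reasoning length, while the ratio could act as a signal for when braking behavior should be learned; the actual construction in SBT is more involved.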