Large Reasoning Models (LRMs) allocate substantial inference-time compute to Chain-of-Thought (CoT) reasoning, improving performance on mathematics, scientific QA, and tool usage. However, this introduces overthinking: LRMs often reach a correct intermediate solution, continue reasoning, and overwrite it with an incorrect answer. We first demonstrate that oracle stopping--where we inject </think> at every sentence boundary and select the best stopping point in hindsight--improves average accuracy by 8\% while reducing thinking tokens by 72\%, exposing substantial overthinking. Motivated by this finding, we propose ThinkBrake, which monitors the log-probability margin between the top continuation token and </think> at sentence boundaries, stopping reasoning when this margin narrows. ThinkBrake requires no training and achieves favorable accuracy-efficiency trade-offs across math, scientific QA, and tool usage benchmarks, reducing thinking token usage by up to 30\%. Furthermore, we provide theoretical analysis showing that ThinkBrake is equivalent to test-time realignment with a reward bonus for the </think> token.
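The stopping rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the dictionary-of-logits interface, and the threshold `tau` are all assumptions for exposition; the core idea shown is computing the log-probability margin between the top continuation token and `</think>` and braking when it narrows below a threshold.

```python
import math

def should_stop(logits: dict, think_end: str = "</think>", tau: float = 1.0) -> bool:
    """At a sentence boundary, stop reasoning when the log-probability margin
    between the top continuation token and </think> falls below tau.

    `logits` maps candidate next tokens to raw (unnormalized) scores;
    the threshold `tau` is a hypothetical tunable, not the paper's value."""
    # Convert raw logits to log-probabilities with a numerically stable log-softmax.
    m = max(logits.values())
    log_z = m + math.log(sum(math.exp(v - m) for v in logits.values()))
    logp = {tok: v - log_z for tok, v in logits.items()}
    # Margin between the most likely continuation and the </think> token.
    top_tok = max(logp, key=logp.get)
    margin = logp[top_tok] - logp.get(think_end, float("-inf"))
    return margin < tau

# Example: </think> is nearly as likely as the top token, so we brake.
print(should_stop({"So": 2.0, "</think>": 1.6, "the": 0.5}))   # True
# Example: the model strongly prefers to keep reasoning, so we continue.
print(should_stop({"So": 5.0, "</think>": 0.0, "the": 0.5}))   # False
```

Note that because log-softmax subtracts the same normalizer from every token, the margin between two tokens' log-probabilities equals the margin between their raw logits, so in practice the check can be done directly on logits.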