Large Reasoning Models (LRMs) allocate substantial inference-time compute to Chain-of-Thought (CoT) reasoning, improving performance on mathematics, scientific QA, and tool usage. However, this introduces overthinking: LRMs often reach a correct intermediate solution, continue reasoning, and overwrite it with an incorrect answer. We first demonstrate that oracle stopping--where we inject </think> at every sentence boundary and select the best stopping point in hindsight--improves average accuracy by 8% while reducing thinking tokens by 72%, exposing substantial overthinking. Motivated by this finding, we propose ThinkBrake, which monitors the log-probability margin between the top continuation token and </think> at sentence boundaries, stopping reasoning when this margin narrows. ThinkBrake requires no training and achieves favorable accuracy-efficiency trade-offs across math, scientific QA, and tool usage benchmarks, reducing thinking token usage by up to 30%. Furthermore, we provide theoretical analysis showing that ThinkBrake is equivalent to test-time realignment with a reward bonus for the </think> token.
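The boundary check at the core of ThinkBrake can be illustrated with a minimal sketch. This is not the paper's implementation; `boundary_logprobs`, `should_stop`, and the threshold value are hypothetical names and settings assumed for illustration, standing in for the model's per-token log-probabilities at a sentence boundary.

```python
def should_stop(boundary_logprobs: dict, margin_threshold: float = 1.0) -> bool:
    """Decide whether to inject </think> at a sentence boundary.

    Stops reasoning when the log-probability margin between the most
    likely continuation token and the </think> token narrows below a
    threshold, as described in the abstract.
    """
    # Log-probability of the single most likely next token.
    top_logprob = max(boundary_logprobs.values())
    # Log-probability assigned to the </think> token (-inf if absent).
    think_end_logprob = boundary_logprobs.get("</think>", float("-inf"))
    # Narrow margin means the model already considers stopping plausible.
    margin = top_logprob - think_end_logprob
    return margin < margin_threshold


# Example: </think> is nearly as likely as the top token, so we stop.
print(should_stop({"</think>": -0.5, "the": -0.6}))   # → True
# Example: </think> is far less likely than the top token, so we continue.
print(should_stop({"</think>": -5.0, "the": -0.1}))   # → False
```

Because the rule only reads log-probabilities the model already produces, it adds no training and negligible inference overhead, consistent with the training-free framing above.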