Reasoning LLMs show improved performance with longer chains of thought. However, recent work has highlighted their tendency to overthink: they continue revising answers even after reaching the correct solution. We quantitatively confirm this inefficiency from a distribution-dynamics perspective by tracking Pass@1 averaged over a large number of rollouts, and find that the model often begins to consistently produce the correct answer early in the reasoning process, rendering the extra reasoning tokens wasteful. To detect and prevent overthinking, we propose a novel, simple, and inexpensive signal, Entropy After </Think> (EAT), for monitoring and deciding whether to exit reasoning early. By appending a stop-thinking token (</think>) and monitoring the entropy of the following token as the model reasons, we obtain a trajectory that decreases and stabilizes when Pass@1 plateaus; thresholding its variance under an exponential moving average yields a practical stopping rule. Importantly, our approach adaptively allocates compute based on the EAT trajectory, spending compute more efficiently than fixing the token budget for all questions. Empirically, on MATH500 and AIME2025, EAT reduces token usage by 12% to 22% without harming accuracy. EAT also remains effective in black-box settings where logits from the reasoning model are inaccessible and EAT is computed with proxy models: we verify feasibility by early-stopping Llama 70B with a 1.5B model and Claude 3.7 with a local 4B model.
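The stopping rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the EAT signal arrives as a stream of per-step entropy values (computed from the next-token distribution after appending </think>), and the smoothing factor, variance threshold, and warmup length are illustrative placeholders rather than values from the paper.

```python
import math

def eat_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def eat_stopping_point(entropies, alpha=0.3, var_threshold=0.01, warmup=5):
    """Return the reasoning step at which to stop, or None if no stop fires.

    Maintains an exponential moving average (EMA) of the EAT trajectory and
    an EMA estimate of its variance; stops once that variance drops below
    `var_threshold`, i.e. once the trajectory has stabilized. `alpha`,
    `var_threshold`, and `warmup` are hypothetical hyperparameters.
    """
    ema = None
    ema_var = None
    for step, h in enumerate(entropies):
        if ema is None:
            ema, ema_var = h, 0.0
            continue
        dev = h - ema
        ema += alpha * dev                                 # update EMA mean
        ema_var = (1 - alpha) * (ema_var + alpha * dev * dev)  # update EMA variance
        if step >= warmup and ema_var < var_threshold:
            return step
    return None
```

On a trajectory that decreases and then flattens (as the abstract describes happening when Pass@1 plateaus), the rule fires shortly after the plateau begins; on a still-falling trajectory, it keeps reasoning.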