Large reasoning models (LRMs) have demonstrated impressive capabilities in domains such as mathematics and program synthesis. Despite their strong performance, LRMs often exhibit overthinking: excessive and redundant reasoning steps that introduce inefficiency during inference. This phenomenon raises an important question for LRM self-evaluation: how can a model autonomously assess the correctness of its own reasoning trajectory without external labels? To address this, we propose the Chain-of-Reasoning Embedding (CoRE), a sequence of hidden states in latent space that enables label-free self-evaluation of an LRM's intermediate reasoning steps, thereby enhancing its metacognitive abilities and improving reasoning efficiency. By analyzing the geometric properties of CoRE trajectories, we find that redundant reasoning typically manifests as cyclical fluctuations, which correspond to repetitive and unconscious reflection/exploration. Leveraging this insight, we further introduce CoRE-Eval, a training-free, label-free self-evaluation framework that detects such patterns and dynamically decides whether to terminate reasoning early. Extensive experiments on mathematical reasoning benchmarks (GSM8K, MATH-500, and AIME) across model sizes from 7B to 32B show that CoRE-Eval reduces chain-of-thought length by 13.7% to 33.2% while improving answer accuracy by around 10%, achieving 70.0% accuracy on the challenging AIME benchmark with the 32B model.
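To make the core idea concrete, the following is a minimal sketch of cyclical-fluctuation detection over a trajectory of step embeddings. It is an illustrative assumption, not the paper's actual CoRE-Eval algorithm: the function `detect_cycle`, the use of cosine similarity between consecutive hidden states, and the autocorrelation threshold are all hypothetical choices standing in for whatever geometric analysis the method really performs.

```python
import numpy as np

def detect_cycle(embeddings, lag_range=(2, 8), threshold=0.8):
    """Flag cyclical fluctuation in a trajectory of reasoning-step embeddings.

    embeddings: (T, d) array, one hidden-state vector per reasoning step.
    Returns (is_cyclic, best_lag). A high normalized autocorrelation of the
    step-to-step similarity signal at some lag is taken as evidence of
    repetitive, cycle-like reasoning that could be terminated early.
    This is a hypothetical sketch, not the authors' published procedure.
    """
    # Normalize each step vector, then take cosine similarity between
    # consecutive steps to get a 1-D "trajectory motion" signal of length T-1.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = np.sum(X[:-1] * X[1:], axis=1)

    # Center the signal and compute its autocorrelation at candidate lags.
    s = sims - sims.mean()
    denom = np.dot(s, s)
    if denom == 0:  # constant signal: no fluctuation to classify
        return False, None
    best_lag, best_ac = None, -1.0
    for lag in range(lag_range[0], min(lag_range[1] + 1, len(s))):
        ac = np.dot(s[:-lag], s[lag:]) / denom
        if ac > best_ac:
            best_ac, best_lag = ac, lag
    return best_ac > threshold, best_lag
```

In an early-termination loop, a wrapper would call `detect_cycle` on the growing trajectory after each reasoning step and stop decoding once it returns `True`; the lag range and threshold would need tuning per model and benchmark.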