Large Reasoning Models (LRMs) have shown impressive performance on complex reasoning tasks such as mathematics, yet they also display misbehaviors that expose their limitations. In particular, when faced with hard questions, LRMs often engage in unproductive reasoning until they hit the context limit, producing wrong answers while wasting substantial computation. This phenomenon reflects a fundamental issue: current answering paradigms overlook the relationship between questions and LRMs' capability boundaries. In this paper, we investigate whether LRMs possess self-awareness of their capability boundaries. We begin with the observation that LRMs may know what they cannot solve through their expressed reasoning confidence. For black-box models, we find that reasoning expressions reveal boundary signals: confidence trajectories grow at an accelerating rate for solvable problems, whereas uncertainty trajectories converge for unsolvable ones. For white-box models, we show that the hidden states of the last input token encode boundary information, with solvable and unsolvable problems being linearly separable even before reasoning begins. Building on these findings, we propose two simple yet effective optimization strategies: reasoning expression monitoring and hidden states monitoring. Experiments demonstrate that these boundary-aware strategies enable LRMs to avoid unproductive reasoning without sacrificing accuracy, significantly improving reliability and efficiency by cutting token usage by 62.7%–93.6%.
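As a rough illustration of the white-box finding, the sketch below fits a linear probe on the hidden state of the last input token to separate solvable from unsolvable questions before any reasoning is generated. It is a minimal sketch under stated assumptions, not the paper's released implementation: the model name, the choice of logistic regression as the probe, and the labeling scheme (1 = solvable, 0 = unsolvable, determined from held-out answer correctness) are illustrative assumptions.

```python
# Hypothetical sketch of a hidden-state boundary probe; model name, probe type,
# and labeling are illustrative assumptions, not the authors' released code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed example LRM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_hidden_state(question: str) -> torch.Tensor:
    """Final-layer hidden state of the last input token, before reasoning starts."""
    inputs = tokenizer(question, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # hidden_states[-1] has shape (batch, seq_len, dim); take the last token's vector.
    return out.hidden_states[-1][0, -1, :]

def fit_boundary_probe(questions: list[str], labels: list[int]) -> LogisticRegression:
    """Fit a linear probe separating solvable (1) from unsolvable (0) questions."""
    feats = torch.stack([last_token_hidden_state(q) for q in questions]).float().numpy()
    return LogisticRegression(max_iter=1000).fit(feats, labels)

# Usage (assumed data): probe = fit_boundary_probe(train_questions, train_labels)
# probe.predict([...]) then flags questions likely beyond the model's capability,
# allowing generation to be skipped or truncated instead of reasoning unproductively.
```

In this sketch, the probe is trained offline and consulted at inference time as a gate before decoding, which is one way a hidden-states-monitoring strategy could avoid unproductive reasoning on questions predicted to be unsolvable.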