We introduce a simple yet novel entropy-based framework for improving token efficiency in large language models on reasoning tasks. Our approach uses the Shannon entropy of token-level logprobs as a confidence signal to enable early stopping, achieving 25-50% computational savings while maintaining task accuracy. Crucially, we demonstrate that entropy-based confidence calibration is an emergent property of advanced post-training optimization: it is present in modern reasoning models but notably absent in standard instruction-tuned and pre-trained models (e.g., Llama 3.3 70B). We show that the entropy threshold for stopping reasoning varies from model to model but can be calibrated in one shot using only a few examples from existing reasoning datasets. Our results indicate that advanced reasoning models often recognize early in generation that they have reached a correct answer, and that this emergent confidence awareness can be exploited to save tokens and reduce latency. The framework performs consistently across reasoning-optimized model families, revealing that confidence mechanisms are a distinguishing characteristic of modern post-trained reasoning systems relative to their predecessors.
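To make the mechanism concrete, the following is a minimal Python sketch of the core idea, assuming access to per-token top-k logprobs as returned by most LLM inference APIs. The helper names (`shannon_entropy`, `should_stop`) and the `threshold` and `window` values are illustrative assumptions for exposition, not values or APIs from this work; the framework itself calibrates the threshold per model from a few reasoning examples.

```python
import math

def shannon_entropy(top_logprobs):
    """Shannon entropy (in nats) of one decoding step's distribution.

    `top_logprobs` is a list of log-probabilities for the top-k candidate
    tokens at a single step. Renormalizing over the observed top-k mass
    approximates the full-vocabulary entropy.
    """
    probs = [math.exp(lp) for lp in top_logprobs]
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_stop(entropy_history, threshold=0.3, window=16):
    """Illustrative early-stopping rule: halt reasoning once the mean
    token entropy over the last `window` tokens drops below a
    model-specific `threshold` (both values here are assumptions, to be
    calibrated per model on a handful of reasoning examples).
    """
    if len(entropy_history) < window:
        return False
    recent = entropy_history[-window:]
    return sum(recent) / len(recent) < threshold
```

In use, one would append `shannon_entropy(...)` for each generated token to a running list and check `should_stop` after every step; the threshold would be fit per model, e.g., by sweeping candidate values on a few held-out examples and keeping the lowest entropy at which accuracy is preserved.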