Large language models solve complex tasks by generating long reasoning chains, achieving higher accuracy at the cost of increased computation and a reduced ability to isolate the functionally relevant reasoning. Prior work on compact reasoning shortens such chains through probabilistic sampling, heuristics, or supervision from frontier models, but offers limited insight into whether models internally encode token-level functional importance for answer generation. We address this gap diagnostically and propose greedy pruning, a likelihood-preserving deletion procedure that iteratively removes the reasoning tokens whose deletion least degrades model likelihood under a specified objective, yielding length-controlled reasoning chains. We evaluate the pruned reasoning in a distillation framework and show that students trained on pruned chains outperform a frontier-model-supervised compression baseline at matched reasoning lengths. Finally, our analysis reveals systematic pruning patterns and shows that attention scores can predict greedy pruning ranks, further suggesting that models encode a nontrivial functional-importance structure over reasoning tokens.
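As a rough illustration of the procedure described above, the sketch below shows one plausible reading of greedy pruning: at each step, delete the reasoning token whose removal least degrades a likelihood objective, and stop once a target length is reached. The scoring function `answer_log_likelihood` and the `target_length` parameter are illustrative assumptions, not the paper's exact interface or objective.

```python
# Minimal sketch of greedy pruning, assuming a hypothetical scoring function
# `answer_log_likelihood(tokens)` that returns the model's log-likelihood of the
# reference answer conditioned on the (pruned) reasoning chain.

from typing import Callable, List


def greedy_prune(
    tokens: List[str],
    answer_log_likelihood: Callable[[List[str]], float],
    target_length: int,
) -> List[str]:
    """Iteratively delete the reasoning token whose removal hurts likelihood least."""
    pruned = list(tokens)
    while len(pruned) > target_length:
        best_idx, best_score = 0, float("-inf")
        # Try deleting each remaining token and keep the deletion that
        # preserves the likelihood objective best.
        for i in range(len(pruned)):
            candidate = pruned[:i] + pruned[i + 1:]
            score = answer_log_likelihood(candidate)
            if score > best_score:
                best_idx, best_score = i, score
        del pruned[best_idx]
    return pruned
```

The order in which tokens are deleted by this loop also induces a per-token rank (earlier deletion = lower functional importance), which is the kind of "greedy pruning rank" the analysis relates to attention scores.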