While recent works (e.g. o1, DeepSeek R1) have demonstrated the great promise of using long Chain-of-Thought (CoT) to improve the reasoning capabilities of language models, scaling it up at test time is challenging due to inefficient memory usage -- intermediate computations accumulate indefinitely in the context even when they are no longer needed for future thoughts. We propose PENCIL, which incorporates a reduction mechanism into the autoregressive generation process, allowing the model to recursively clean up intermediate thoughts based on patterns learned from training. With this reduction mechanism, PENCIL significantly reduces the maximum context length required during generation, and thus can generate longer thoughts with limited memory, solving larger-scale problems given more thinking time. For example, we demonstrate that PENCIL achieves 97\% accuracy on the challenging Einstein's puzzle -- a task even large models like GPT-4 struggle with -- using only a small 25M-parameter transformer with a 2048-token context length. Theoretically, we prove that PENCIL can perform universal space-efficient computation by simulating Turing machines with optimal time and space complexity, and thus can solve arbitrary computational tasks that would otherwise be intractable given context window constraints.
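To make the reduction mechanism concrete, the following Python snippet is a minimal illustrative sketch (not the paper's implementation): it assumes special tokens [CALL], [SEP], and [RETURN] and a hypothetical next-token function \texttt{model\_step}, and applies a rewrite of the form C [CALL] T [SEP] A [RETURN] $\rightarrow$ C A, discarding the intermediate thought T once its answer A is available.

\begin{verbatim}
# Sketch of a PENCIL-style reduction applied during autoregressive generation.
# Assumptions (not from the abstract): token names [CALL], [SEP], [RETURN],
# and a user-supplied `model_step(tokens) -> next_token` function.

def reduce_once(tokens):
    """Rewrite C [CALL] T [SEP] A [RETURN] -> C A for the innermost call,
    erasing the intermediate thought T from the context."""
    try:
        end = tokens.index("[RETURN]")
    except ValueError:
        return tokens, False  # nothing to reduce yet
    # innermost [CALL] preceding this [RETURN]
    start = max(i for i, t in enumerate(tokens[:end]) if t == "[CALL]")
    sep = tokens.index("[SEP]", start, end)
    answer = tokens[sep + 1:end]                           # A
    return tokens[:start] + answer + tokens[end + 1:], True  # C A ...

def generate_with_reduction(model_step, prompt_tokens, max_steps=4096):
    """Autoregressive loop that triggers a reduction whenever [RETURN] is
    emitted, keeping the working context short even for long thoughts."""
    ctx = list(prompt_tokens)
    for _ in range(max_steps):
        nxt = model_step(ctx)
        ctx.append(nxt)
        if nxt == "[RETURN]":
            ctx, _ = reduce_once(ctx)
        if nxt == "[EOS]":
            break
    return ctx
\end{verbatim}

Under this sketch, the context grows only while a sub-thought is in progress and shrinks back as soon as its answer is produced, which is what allows the maximum context length to stay far below the total length of the generated reasoning.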