Looping (reusing a block of layers across depth) and depth growing (training shallow-to-deep models by duplicating middle layers) have both been linked to stronger reasoning, but their relationship remains unclear. We provide a mechanistic unification: looped and depth-grown models exhibit convergent depth-wise signatures, including increased reliance on late layers and recurring patterns aligned with the looped or grown block. These shared signatures support the view that their gains stem from a common form of iterative computation. Building on this connection, we show that the two techniques are adaptable and composable: applying inference-time looping to the middle blocks of a depth-grown model improves accuracy on some reasoning primitives by up to $2\times$, even though the model was never trained to loop. Both approaches also adapt better than the baseline when given more in-context examples or additional supervised fine-tuning data. Moreover, depth-grown models achieve the largest reasoning gains when trained with higher-quality, math-heavy cooldown mixtures, and these gains can be further boosted by adapting a middle block to loop. Overall, our results position depth growth and looping as complementary, practical methods for inducing and scaling iterative computation to improve reasoning.
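To make the two operations concrete, the following is a minimal PyTorch sketch, not the paper's implementation: `Block`, `grow_depth`, and `looped_forward` are illustrative names we introduce here, and the toy residual-MLP layer stands in for a real transformer block. Depth growing duplicates a contiguous middle span of layers before further training; inference-time looping runs a chosen block several times with the same weights.

```python
# Illustrative sketch only (assumed helper names, not the authors' API).
import copy
import torch
import torch.nn as nn

class Block(nn.Module):
    """Stand-in for one transformer layer (here: a tiny residual MLP)."""
    def __init__(self, d: int):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
        self.norm = nn.LayerNorm(d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(self.norm(x))

def grow_depth(layers: nn.ModuleList, start: int, end: int) -> nn.ModuleList:
    """Depth growing: duplicate the middle layers [start, end) and re-insert
    the copies, turning a shallow stack into a deeper one for more training."""
    middle = [copy.deepcopy(layers[i]) for i in range(start, end)]
    return nn.ModuleList(list(layers[:end]) + middle + list(layers[end:]))

def looped_forward(layers: nn.ModuleList, x: torch.Tensor,
                   loop_start: int, loop_end: int, n_loops: int) -> torch.Tensor:
    """Inference-time looping: run the block [loop_start, loop_end) n_loops
    times instead of once, reusing the same weights across depth."""
    for layer in layers[:loop_start]:
        x = layer(x)
    for _ in range(n_loops):
        for layer in layers[loop_start:loop_end]:
            x = layer(x)
    for layer in layers[loop_end:]:
        x = layer(x)
    return x

if __name__ == "__main__":
    d, depth = 16, 6
    layers = nn.ModuleList(Block(d) for _ in range(depth))
    # Grow a 6-layer stack to 8 layers by duplicating layers 2..4.
    layers = grow_depth(layers, start=2, end=4)
    x = torch.randn(1, 10, d)
    # Compose the two techniques: loop the grown middle block (indices 2..6)
    # three times at inference, with no loop-specific training.
    y = looped_forward(layers, x, loop_start=2, loop_end=6, n_loops=3)
    print(y.shape)  # torch.Size([1, 10, 16])
```

The composability result in the abstract corresponds to the last two lines of the demo: the block grown by `grow_depth` is the same span handed to `looped_forward`, so looping simply extends the iterative computation that growth already instilled.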