We study how individual training examples shape the internal computation of looped transformers, where a shared block is applied for $\tau$ recurrent iterations to enable latent reasoning. Existing training-data influence estimators such as TracIn yield a single scalar score that aggregates over all loop iterations, obscuring \emph{when} during the recurrent computation a training example matters. We introduce \textit{Step-Decomposed Influence (SDI)}, which decomposes TracIn into a length-$\tau$ influence trajectory by unrolling the recurrent computation graph and attributing influence to specific loop iterations. To make SDI practical at transformer scale, we propose a TensorSketch implementation that never materialises per-example gradients. Experiments on looped GPT-style models and algorithmic reasoning tasks show that SDI scales well, matches full-gradient baselines with low error, and supports a broad range of data-attribution and interpretability tasks, providing per-step insight into the latent reasoning process.
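To make the decomposition concrete, the following is a minimal sketch on a toy looped model, not the paper's implementation: a shared weight matrix $W$ applied for $\tau$ tanh iterations, with the per-iteration gradient contributions $G_t$ (one per unrolled copy of $W$) recovered by manual backpropagation. The single-checkpoint TracIn score is the dot product of train and test gradients, and the SDI trajectory attributes it per loop step by dotting each $G_t$ of the training example with the full test gradient; the per-step scores sum exactly to the scalar score. All function names and the model itself are hypothetical illustrations.

```python
import numpy as np

def unrolled_grads(W, x, y, tau):
    """Per-iteration gradient contributions for a toy looped block
    h_t = tanh(W h_{t-1}) with loss L = 0.5 * ||h_tau - y||^2.
    Returns [G_1, ..., G_tau] such that dL/dW = sum_t G_t."""
    hs = [x]
    for _ in range(tau):
        hs.append(np.tanh(W @ hs[-1]))
    delta = hs[-1] - y                      # dL/dh_tau
    grads = [None] * tau
    for t in range(tau, 0, -1):
        da = delta * (1.0 - hs[t] ** 2)     # backprop through tanh
        grads[t - 1] = np.outer(da, hs[t - 1])  # contribution of unrolled copy t
        delta = W.T @ da                    # propagate to previous iteration
    return grads

def sdi_trajectory(W, train_ex, test_ex, tau):
    """Length-tau influence trajectory: dot each per-step training-gradient
    contribution with the full test gradient (single-checkpoint TracIn)."""
    g_train_steps = unrolled_grads(W, *train_ex, tau)
    g_test = sum(unrolled_grads(W, *test_ex, tau))
    return [float(np.sum(g * g_test)) for g in g_train_steps]

rng = np.random.default_rng(0)
d, tau = 4, 3
W = 0.5 * rng.standard_normal((d, d))
train = (rng.standard_normal(d), rng.standard_normal(d))
test = (rng.standard_normal(d), rng.standard_normal(d))

traj = sdi_trajectory(W, train, test, tau)
scalar = float(np.sum(sum(unrolled_grads(W, *train, tau))
                      * sum(unrolled_grads(W, *test, tau))))
# Sanity check: the per-step influences sum to the aggregate TracIn score.
assert abs(sum(traj) - scalar) < 1e-9
```

At transformer scale the paper avoids forming these per-example gradients at all via TensorSketch; the sketch above only illustrates the additivity that makes the step decomposition well defined.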