Transformers have demonstrated remarkable capabilities in multi-step reasoning tasks. However, our understanding of the underlying mechanisms by which they acquire these abilities through training remains limited, particularly from a theoretical standpoint. This work investigates how transformers learn to solve symbolic multi-step reasoning problems via chain-of-thought processes, focusing on path-finding in trees. We analyze two intertwined tasks: a backward reasoning task, in which the model outputs a path from a goal node to the root, and a more complex forward reasoning task, in which the model carries out two-stage reasoning by first identifying the goal-to-root path and then reversing it to produce the root-to-goal path. Our theoretical analysis, grounded in the dynamics of gradient descent, shows that trained one-layer transformers provably solve both tasks, with generalization guarantees on unseen trees. In particular, our multi-phase analysis of the training dynamics for forward reasoning elucidates how different attention heads learn to specialize and coordinate autonomously to solve the two subtasks within a single autoregressive pass. These results provide a mechanistic explanation of how trained transformers can implement sequential algorithmic procedures. Moreover, they offer insights into the emergence of reasoning abilities, suggesting that when tasks are structured to include intermediate chain-of-thought steps, even shallow multi-head transformers can effectively solve problems that would otherwise require deeper architectures.
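To make the two symbolic tasks concrete, the following minimal sketch (not taken from the paper; the tree representation via child-to-parent pointers and the function names `backward_path` and `forward_path` are illustrative assumptions) spells out what the model is trained to emit in each case: the goal-to-root path for backward reasoning, and its reversal for forward reasoning.

```python
# Minimal sketch of the two symbolic path-finding tasks on a tree.
# The tree is given as child -> parent pointers; names are illustrative only.

def backward_path(parent: dict, goal: str) -> list[str]:
    """Backward reasoning: emit the path from the goal node up to the root."""
    path = [goal]
    while path[-1] in parent:          # the root has no parent entry
        path.append(parent[path[-1]])
    return path

def forward_path(parent: dict, goal: str) -> list[str]:
    """Forward reasoning: first find the goal-to-root path, then reverse it."""
    return list(reversed(backward_path(parent, goal)))

if __name__ == "__main__":
    # Tree: A is the root, with edges A -> B, A -> C, B -> D.
    parent = {"B": "A", "C": "A", "D": "B"}
    print(backward_path(parent, "D"))  # ['D', 'B', 'A']
    print(forward_path(parent, "D"))   # ['A', 'B', 'D']
```

In the paper's setting, a trained one-layer multi-head transformer produces these sequences token by token, with the forward task requiring both stages within one autoregressive generation.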