Chain-of-Thought (CoT) and Looped Transformers have been shown to empirically improve performance on reasoning tasks and to theoretically enhance expressivity by recursively increasing the number of computational steps. However, their comparative capabilities are still not well understood. In this paper, we provide a formal analysis of their respective strengths and limitations. We show that Looped Transformers can efficiently simulate parallel computation for deterministic tasks, which we formalize as evaluation over directed acyclic graphs. In contrast, CoT with stochastic decoding excels at approximate inference over compositional structures, namely self-reducible problems. These separations indicate which tasks favor depth-driven recursion over sequential stochastic decoding, thereby offering practical guidance for choosing between the two reasoning paradigms.
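To make the DAG-evaluation framing concrete, the following is an illustrative sketch (hypothetical, not the paper's construction): each pass of the loop evaluates every node whose predecessors are already computed, mirroring how one iteration of a Looped Transformer can process all ready subcomputations in parallel, so the number of iterations tracks the DAG's depth rather than its size.

```python
def evaluate_dag(nodes, preds, ops, inputs):
    """Evaluate a DAG layer by layer.

    nodes:  list of node ids
    preds:  dict mapping each node to its list of predecessor ids
    ops:    dict mapping each internal node to a function of its predecessors' values
    inputs: dict mapping each leaf node to its input value

    Returns (values, iterations): all computed node values and the number of
    parallel rounds used, which equals the depth of the DAG.
    """
    values = dict(inputs)
    iterations = 0
    while len(values) < len(nodes):
        # One "loop iteration": all nodes whose predecessors are ready
        # are evaluated simultaneously (the parallel step).
        ready = [n for n in nodes
                 if n not in values and all(p in values for p in preds[n])]
        for n in ready:
            values[n] = ops[n](*(values[p] for p in preds[n]))
        iterations += 1
    return values, iterations

# Example: (a + b) * (c + d) is a depth-2 DAG, so 2 iterations suffice
# regardless of how many independent sums sit in the first layer.
nodes = ["a", "b", "c", "d", "s1", "s2", "prod"]
preds = {"a": [], "b": [], "c": [], "d": [],
         "s1": ["a", "b"], "s2": ["c", "d"], "prod": ["s1", "s2"]}
ops = {"s1": lambda x, y: x + y, "s2": lambda x, y: x + y,
       "prod": lambda x, y: x * y}
vals, iters = evaluate_dag(nodes, preds, ops, {"a": 1, "b": 2, "c": 3, "d": 4})
# vals["prod"] == 21, iters == 2
```

The contrast with CoT is that a sequential decoder would emit one node per step (size-many steps), whereas the depth-driven loop needs only depth-many rounds, which is the sense in which the abstract's separation favors Looped Transformers on parallelizable deterministic tasks.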