Despite their successes, deep learning models struggle with tasks requiring complex reasoning and function composition. We present a theoretical and empirical investigation into the limitations of Structured State Space Models (SSMs) and Transformers on such tasks. We prove that one-layer SSMs cannot efficiently perform function composition over large domains without impractically large state sizes, and that even with Chain-of-Thought prompting they require a number of steps that scales unfavorably with the complexity of the composition. Multi-layer SSMs are constrained to log-space computation, which limits their reasoning abilities. Our experiments corroborate these theoretical findings. Evaluating models on tasks including various function-composition settings, multi-digit multiplication, dynamic programming, and Einstein's puzzle, we find significant performance degradation even with advanced prompting techniques. Models often resort to shortcuts, leading to compounding errors. These findings point to fundamental barriers rooted in the computational capacity of current deep learning architectures. We underscore the need for innovative solutions to transcend these constraints and achieve reliable multi-step reasoning and compositional task-solving, which is critical for advancing toward general artificial intelligence.
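To make the central task concrete, the following is a minimal sketch of one way a function-composition benchmark of this kind could be set up, assuming the task is posed as an exact-match prompting problem over explicit lookup tables; the prompt format and the `query_model` stub are illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of a function-composition evaluation: sample random functions
# f, g over a domain of size n, prompt a model for g(f(x)), and score
# exact-match accuracy. `query_model` is a hypothetical stub for the
# model under test.
import random

def make_function(domain_size: int) -> list[int]:
    """Sample a random function f: [n] -> [n] as a lookup table."""
    return [random.randrange(domain_size) for _ in range(domain_size)]

def compose_prompt(f: list[int], g: list[int], x: int) -> str:
    """Spell out both tables in the prompt and ask for g(f(x))."""
    f_rows = "; ".join(f"f({i}) = {v}" for i, v in enumerate(f))
    g_rows = "; ".join(f"g({i}) = {v}" for i, v in enumerate(g))
    return (f"{f_rows}\n{g_rows}\n"
            f"What is g(f({x}))? Answer with a single number.")

def evaluate(query_model, domain_size: int = 32, trials: int = 100) -> float:
    """Exact-match accuracy on g(f(x)) queries over random tables."""
    correct = 0
    for _ in range(trials):
        f = make_function(domain_size)
        g = make_function(domain_size)
        x = random.randrange(domain_size)
        answer = query_model(compose_prompt(f, g, x))  # model under test
        correct += int(answer.strip() == str(g[f[x]]))
    return correct / trials
```

Under the stated theoretical results, a fixed-state-size one-layer SSM evaluated this way should see accuracy degrade as `domain_size` grows, since answering the query in one pass requires retaining enough of both tables in state.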