State-space models (SSMs) have emerged as a potential alternative to the previously ubiquitous transformer architecture for building large language models (LLMs). One theoretical weakness of transformers is that they cannot express certain kinds of sequential computation and state tracking (Merrill & Sabharwal, 2023), which SSMs are explicitly designed to address via their close architectural similarity to recurrent neural networks (RNNs). But do SSMs truly have an advantage over transformers in expressive power for state tracking? Surprisingly, the answer is no. Our analysis reveals that the expressive power of SSMs is limited very similarly to that of transformers: SSMs cannot express computation outside the complexity class $\mathsf{TC}^0$. In particular, this means they cannot solve simple state-tracking problems like permutation composition. It follows that SSMs are provably unable to accurately track chess moves with certain notation, evaluate code, or track entities in a long narrative. To supplement our formal analysis, we report experiments showing that Mamba-style SSMs indeed struggle with state tracking. Thus, despite its recurrent formulation, the "state" in an SSM is an illusion: SSMs have similar expressiveness limitations to non-recurrent models like transformers, which may fundamentally limit their ability to solve real-world state-tracking problems.
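To make the permutation-composition task concrete, here is a minimal sketch of the kind of state-tracking problem the abstract refers to: given a sequence of permutations of five elements (the word problem on $S_5$), a model must track their running composition token by token. The setup below is illustrative, not the paper's experimental code; all function names are our own.

```python
import random
from itertools import permutations

def compose(p, q):
    # Apply p first, then q: the composite sends i to q[p[i]].
    return tuple(q[p[i]] for i in range(len(p)))

def track_state(seq):
    # Fold a sequence of permutations into their overall composition --
    # the hidden "state" a sequence model would need to maintain.
    state = tuple(range(5))  # identity permutation on 5 elements
    for p in seq:
        state = compose(state, p)
    return state

# Example input: a random sequence of S_5 permutations (one per "token").
perms = list(permutations(range(5)))
random.seed(0)
seq = [random.choice(perms) for _ in range(16)]
final = track_state(seq)
```

An iterative loop solves this trivially, but the paper's point is that the task is $\mathsf{NC}^1$-hard, so no fixed-depth $\mathsf{TC}^0$ model (transformer or SSM) can express it for arbitrary sequence lengths.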