Structured state-space models (SSMs), such as S4, stemming from the seminal work of Gu et al., are gaining popularity as effective approaches for modeling sequential data. Deep SSMs demonstrate outstanding performance across a diverse set of domains, at a reduced training and inference cost compared to attention-based transformers. Recent developments show that if the linear recurrence powering SSMs allows for multiplicative interactions between inputs and hidden states (e.g. GateLoop, Mamba, GLA), then the resulting architecture can surpass attention-powered foundation models trained on text, in both accuracy and efficiency, at the billion-parameter scale. In this paper, we give theoretical grounding to this recent finding using tools from Rough Path Theory: we show that when random linear recurrences are equipped with simple input-controlled transitions (a selectivity mechanism), the hidden state is provably a low-dimensional projection of a powerful mathematical object called the signature of the input, which captures non-linear interactions between tokens at distinct timescales. Our theory not only motivates the success of modern selective state-space models such as Mamba but also provides a solid framework with which to understand the expressive power of future SSM variants.
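To make the two objects in the abstract concrete, here is a minimal sketch, assuming NumPy, of (i) an input-controlled ("selective") linear recurrence with a multiplicative interaction between input and hidden state, and (ii) the first two levels of the signature of a piecewise-linear input path. The sigmoid gate, the random-matrix initialization, and all function names are illustrative assumptions for exposition, not the exact parameterization used by Mamba, GateLoop, or GLA, nor the paper's construction.

```python
import numpy as np

# Sketch of a selective linear recurrence: h_t = a(x_t) * h_{t-1} + B x_t,
# where the diagonal transition a(x_t) depends on the input (selectivity).
# Gating form and initialization are illustrative assumptions.
def selective_ssm(xs, d_hidden=8, seed=0):
    rng = np.random.default_rng(seed)
    d_in = xs.shape[1]
    B = rng.standard_normal((d_hidden, d_in)) / np.sqrt(d_in)  # random input projection
    W = rng.standard_normal((d_hidden, d_in)) / np.sqrt(d_in)  # parameterizes the gate
    h = np.zeros(d_hidden)
    for x in xs:
        a = 1.0 / (1.0 + np.exp(-(W @ x)))  # input-dependent diagonal transition
        h = a * h + B @ x                   # multiplicative input-state interaction
    return h

# Levels 1 and 2 of the signature of the piecewise-linear path through xs:
# S1 is the total increment; S2 collects iterated integrals, i.e. ordered
# pairwise interactions between increments at distinct times.
def signature_levels(xs):
    dx = np.diff(xs, axis=0)                # path increments
    S1 = dx.sum(axis=0)
    S2 = np.zeros((xs.shape[1], xs.shape[1]))
    run = np.zeros(xs.shape[1])             # cumulative increment so far
    for d in dx:
        S2 += np.outer(run, d) + 0.5 * np.outer(d, d)  # Chen-style update per segment
        run += d
    return S1, S2

xs = np.random.default_rng(1).standard_normal((20, 4))  # length-20 sequence of 4-dim tokens
print(selective_ssm(xs).shape)   # (8,)
S1, S2 = signature_levels(xs)
print(S1.shape, S2.shape)        # (4,) (4, 4)
```

The sketch only exhibits the two objects the abstract relates; the paper's result is that, for random recurrences of this kind, the final hidden state is provably a low-dimensional projection of signature terms such as S1 and S2.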