Structured State-Space Duality (SSD) [Dao & Gu, ICML 2024] is an equivalence between a simple Structured State-Space Model (SSM) and a masked attention mechanism. In particular, a state-space model with a scalar-times-identity state matrix is equivalent to masked self-attention with a $1$-semiseparable causal mask. Consequently, the same sequence transformation (model) admits two algorithmic realizations: a linear-time $O(T)$ recurrence or a quadratic-time $O(T^2)$ attention computation. In this note, we formalize and generalize this duality: (i) we extend SSD from the scalar-identity case to general diagonal SSMs (diagonal state matrices); (ii) we show that these diagonal SSMs match the training-complexity lower bounds of the scalar case while supporting richer dynamics; (iii) we establish a necessary and sufficient condition under which an SSM is equivalent to $1$-semiseparable masked attention; and (iv) we show that this duality does not extend to standard softmax attention, due to rank explosion. Together, these results tighten the bridge between recurrent SSMs and Transformers and widen the design space for expressive yet efficient sequence models.
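To make the stated duality concrete, the following is a minimal numerical sketch (assuming a toy single-channel setup with illustrative variable names, not the paper's implementation): it computes the same scalar-times-identity SSM once as a linear-time recurrence and once as masked attention under the $1$-semiseparable causal mask $L$ with $L_{ts} = \prod_{k=s+1}^{t} a_k$ for $t \ge s$, and checks that the two outputs coincide.

```python
# Minimal sketch of the SSD duality on a toy single-channel example (assumed setup).
# (i) linear-time recurrence vs. (ii) masked attention with a 1-semiseparable mask L.
import numpy as np

rng = np.random.default_rng(0)
T, N = 6, 4                      # sequence length, state dimension (toy sizes)
a = rng.uniform(0.5, 1.0, T)     # scalar decay a_t (state matrix A_t = a_t * I)
B = rng.standard_normal((T, N))  # input projections B_t
C = rng.standard_normal((T, N))  # output projections C_t
x = rng.standard_normal(T)       # scalar input stream x_t

# (i) Recurrence: h_t = a_t h_{t-1} + B_t x_t,  y_t = C_t . h_t
h = np.zeros(N)
y_rec = np.zeros(T)
for t in range(T):
    h = a[t] * h + B[t] * x[t]
    y_rec[t] = C[t] @ h

# (ii) Attention form: y = (L * (C B^T)) x, with the 1-semiseparable causal mask
# L[t, s] = a_{s+1} * ... * a_t for t >= s, and 0 otherwise.
L = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        L[t, s] = np.prod(a[s + 1:t + 1])  # empty product = 1 when s == t
y_att = (L * (C @ B.T)) @ x

assert np.allclose(y_rec, y_att)  # the two realizations agree numerically
```

The recurrence touches each time step once ($O(T)$ with a fixed state size), while the attention form materializes the $T \times T$ masked score matrix ($O(T^2)$), illustrating the two costs quoted above for the same sequence transformation.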