We present LrcSSM, a $\textit{non-linear}$ recurrent model that processes long sequences as fast as today's linear state-space layers. By forcing its Jacobian matrix to be diagonal, the full sequence can be solved in parallel, giving $\mathcal{O}(TD)$ time and memory and only $\mathcal{O}(\log T)$ sequential depth, for input-sequence length $T$ and state dimension $D$. Moreover, LrcSSM offers a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Importantly, the diagonal Jacobian structure incurs no performance loss relative to the original model with a dense Jacobian, and the approach generalizes to other non-linear recurrent models, demonstrating broader applicability. On a suite of long-range forecasting tasks, we demonstrate that LrcSSM outperforms Transformers, LRU, S5, and Mamba.
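To illustrate the parallelism claim, the sketch below shows how a diagonal input-varying recurrence $x_t = a_t \odot x_{t-1} + b_t$ can be evaluated over all $T$ steps with an associative scan, which is the standard mechanism behind the $\mathcal{O}(TD)$ work and $\mathcal{O}(\log T)$ sequential-depth figures quoted above. This is a minimal, hypothetical example using `jax.lax.associative_scan`, not the authors' implementation of LrcSSM; the array names `a`, `b` and the shapes are illustrative assumptions.

```python
# Minimal sketch: parallel solve of a diagonal, input-varying linear recurrence.
# With elementwise (diagonal) coefficients a_t, the affine update
# x_t = a_t * x_{t-1} + b_t composes associatively, so an associative scan
# computes all T states in O(log T) sequential depth and O(T*D) total work.
import jax
import jax.numpy as jnp

def diag_recurrence_scan(a, b):
    """Solve x_t = a_t * x_{t-1} + b_t for all t; a, b have shape (T, D)."""
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # Compose two elementwise affine maps: (a_r, b_r) after (a_l, b_l).
        return a_r * a_l, a_r * b_l + b_r
    _, x = jax.lax.associative_scan(combine, (a, b))
    return x  # shape (T, D): state after each time step

# Usage with random coefficients (illustrative shapes)
T, D = 1024, 16
a = jax.random.uniform(jax.random.PRNGKey(0), (T, D), minval=0.5, maxval=0.99)
b = jax.random.normal(jax.random.PRNGKey(1), (T, D))
x = diag_recurrence_scan(a, b)
print(x.shape)  # (1024, 16)
```

A dense Jacobian would make each step a matrix-vector product whose composition costs $\mathcal{O}(D^3)$ per combine; restricting the Jacobian to be diagonal keeps each combine at $\mathcal{O}(D)$, which is what makes the parallel scan practical.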