Deep state-space models (Deep SSMs) have shown in-context learning capabilities on autoregressive tasks, similar to transformers. However, the architectural requirements and mechanisms that enable this in recurrent networks remain unclear. This study demonstrates that state-space model architectures can perform gradient-based learning and use it for in-context learning. We prove that a single structured state-space model layer, augmented with local self-attention, can reproduce the outputs of an implicit linear model with least-squares loss after one step of gradient descent. Our key insight is that the diagonal linear recurrent layer can act as a gradient accumulator, which can be "applied" to the parameters of the implicit regression model. We validate our construction by training randomly initialized augmented SSMs on simple linear regression tasks. The empirically optimized parameters match the theoretical ones obtained analytically from the implicit model construction. Extensions to multi-step linear and non-linear regression yield consistent results. The constructed SSM encompasses features of modern deep state-space models, with the potential for scalable training and effectiveness even on general tasks. The theoretical construction elucidates the role of local self-attention and multiplicative interactions in recurrent architectures as the key ingredients for enabling the expressive power typical of foundation models.
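To make the gradient-accumulator idea concrete, the following minimal sketch (our illustration, not the paper's exact construction; all variable names are hypothetical, and per-token elementwise products stand in for the local self-attention block) shows how a diagonal linear recurrence that sums per-token terms y_t x_t reproduces the prediction of an implicit linear model after one gradient-descent step on the least-squares loss from a zero initialization.

```python
import numpy as np

# Sketch assumptions: implicit model y = W x, loss L(W) = 0.5 * sum_t ||y_t - W x_t||^2.
# At W = 0 the per-token gradient is -y_t x_t^T, so one GD step gives
# W_1 = lr * sum_t y_t x_t^T. A diagonal linear recurrence with unit
# transition accumulates exactly this sum.

rng = np.random.default_rng(0)
d, T, lr = 4, 32, 0.1

W_star = rng.normal(size=(1, d))   # ground-truth linear map
xs = rng.normal(size=(T, d))       # in-context inputs
ys = xs @ W_star.T                 # in-context targets, shape (T, 1)
x_q = rng.normal(size=(d,))        # query input

# Stand-in for the local (adjacent-token) multiplicative interaction:
# per-token products y_t * x_t, i.e. the negative gradients at W = 0.
grads = ys * xs                    # shape (T, d)

# Diagonal linear recurrence h_t = a * h_{t-1} + grads[t] with a = 1:
# the hidden state is a running gradient accumulator.
h = np.zeros(d)
for g in grads:
    h = 1.0 * h + g                # identity diagonal transition

# Readout: apply the one-step-updated implicit model W_1 to the query.
pred_ssm = lr * h @ x_q

# Reference: explicit one-step gradient descent on the implicit model.
W1 = lr * (ys.T @ xs)              # shape (1, d)
pred_gd = (W1 @ x_q).item()

assert np.allclose(pred_ssm, pred_gd)
print(pred_ssm, pred_gd)
```

In the paper's full construction, the products supplied here by hand would come from local self-attention over adjacent tokens, and the accumulated state would be combined multiplicatively with the query inside the SSM layer; this sketch only checks the arithmetic of the gradient-accumulation step.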