The most fundamental capability of modern AI methods such as Large Language Models (LLMs) is the ability to predict the next token in a long sequence of tokens, known as "sequence modeling." Although the Transformer is currently the dominant approach to sequence modeling, its quadratic computational cost with respect to sequence length is a significant drawback. State-space models (SSMs) offer a promising alternative due to their linear decoding efficiency and high parallelizability during training. However, existing SSMs often rely on seemingly ad hoc linear recurrence designs. In this work, we explore SSM design through the lens of online learning, conceptualizing SSMs as meta-modules for specific online learning problems. This approach links SSM design to formulating precise online learning objectives, with state transition rules derived from optimizing these objectives. Based on this insight, we introduce a novel deep SSM architecture based on an implicit update for optimizing an online regression objective. Our experimental results show that our models outperform state-of-the-art SSMs, including the Mamba model, on standard sequence modeling benchmarks and language modeling tasks.
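To make the core idea concrete, the following is a minimal sketch (not the paper's actual architecture) of how an implicit, i.e. proximal, update on an online regression loss yields a closed-form state recurrence. The state `h`, key `k`, value `v`, and step size `lr` are illustrative names assumed here; the closed form follows from solving the fixed-point condition of the implicit step.

```python
import numpy as np

def implicit_update(h_prev, k, v, lr):
    """One implicit (proximal) step on the online regression loss
    0.5 * (h @ k - v)**2, i.e.
        h_t = argmin_h  ||h - h_prev||^2 / (2*lr) + 0.5 * (h @ k - v)**2.
    Setting the gradient to zero gives h_t = h_prev + lr * (v - h_t @ k) * k;
    taking the inner product with k solves for h_t @ k in closed form."""
    kk = k @ k
    pred = (h_prev @ k + lr * kk * v) / (1.0 + lr * kk)  # this equals h_t @ k
    return h_prev + lr * (v - pred) * k

# Unrolling the update over a token sequence gives an SSM-style linear
# recurrence in the state (the transition depends on the current input only).
rng = np.random.default_rng(0)
h = np.zeros(4)
for _ in range(8):
    k, v = rng.normal(size=4), rng.normal()
    h = implicit_update(h, k, v, lr=0.5)
```

Because the implicit step evaluates the gradient at the *new* state rather than the old one, it is stable for a wider range of step sizes than the explicit (online gradient descent) update, which is one motivation for building the recurrence on it.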