A central challenge in sequence modeling is efficiently handling tasks with extended contexts. While recent state-space models (SSMs) have made significant progress in this area, they often lack input-dependent filtering or require substantial increases in model complexity to handle input variability. We address this gap by introducing S7, a simplified yet powerful SSM that handles input dependence by combining stable reparameterization with targeted design choices, dynamically adjusting state transitions based on input content while maintaining efficiency and performance. We prove that this reparameterization keeps state transitions well-behaved over time, ensuring stability in long-sequence modeling. It also controls the gradient norm, enabling efficient training and preventing issues such as exploding or vanishing gradients. S7 significantly outperforms baselines across a wide range of sequence modeling tasks, including neuromorphic event-based datasets, Long Range Arena benchmarks, and diverse physical and biological time series. Overall, S7 offers a more straightforward approach to sequence modeling without relying on complex, domain-specific inductive biases, achieving notable improvements across key benchmarks.
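To make the described mechanism concrete, the following is a minimal sketch, not the actual S7 implementation, of a diagonal SSM with input-dependent state transitions under a stability-preserving reparameterization. All names here (InputDependentSSM, w_dt, etc.) are illustrative assumptions, and the update uses a standard zero-order-hold-style discretization rather than S7's exact formulation.

```python
import numpy as np

def softplus(x):
    """Softplus nonlinearity; keeps reparameterized quantities strictly positive."""
    return np.log1p(np.exp(x))

class InputDependentSSM:
    """Toy diagonal SSM whose state transition depends on the current input.

    The per-step transition is A_t = exp(-softplus(a) * dt_t) with an
    input-dependent step size dt_t > 0, so each entry of A_t lies in (0, 1)
    and the recurrence stays stable regardless of the learned values of `a`.
    """

    def __init__(self, state_dim, input_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=state_dim)              # raw (unconstrained) decay
        self.w_dt = rng.normal(size=input_dim) * 0.1     # input -> step size
        self.B = rng.normal(size=(state_dim, input_dim)) * 0.1
        self.C = rng.normal(size=state_dim) * 0.1

    def __call__(self, xs):
        """Run the recurrence over a sequence xs of shape (T, input_dim)."""
        h = np.zeros_like(self.a)
        ys = []
        for x in xs:
            dt = softplus(x @ self.w_dt)                 # input-dependent step, > 0
            A = np.exp(-softplus(self.a) * dt)           # transition in (0, 1): stable
            h = A * h + dt * (self.B @ x)                # selective state update
            ys.append(self.C @ h)
        return np.array(ys)

# Usage: inputs that induce a small step size decay the state slowly,
# giving the model longer effective memory for that content.
model = InputDependentSSM(state_dim=16, input_dim=4)
out = model(np.random.default_rng(1).normal(size=(100, 4)))
print(out.shape)  # (100,)
```

Because softplus keeps both the decay rate and the step size positive, the transition factor exp(-softplus(a) * dt) is always strictly between 0 and 1, so repeated application cannot blow up the hidden state; this is the intuition behind the stability and gradient-norm claims in the abstract.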