Long-range sequence processing poses a significant challenge for Transformers due to their quadratic complexity in the input length. A promising alternative is Mamba, which achieves Transformer-level capabilities while requiring substantially fewer computational resources. In this paper we explore the length-generalization capabilities of Mamba, which we find to be relatively limited. Through a series of visualizations and analyses, we identify that these limitations arise from a restricted effective receptive field, dictated by the sequence length used during training. To address this constraint, we introduce DeciMamba, a context-extension method specifically designed for Mamba. Built on top of a hidden filtering mechanism embedded within the S6 layer, DeciMamba enables the trained model to extrapolate well even without additional training. Empirical experiments on real-world long-range NLP tasks show that DeciMamba can extrapolate to context lengths 25x longer than those seen during training, without requiring additional computational resources. We will release our code and models.
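To make the idea of filtering tokens via the S6 layer's internal selectivity more concrete, below is a minimal sketch of a decimation step. It assumes, for illustration, that importance is scored from the per-token discretization steps Δ produced by the S6 layer and that only the top-scoring tokens are kept; the function name, scoring rule, and shapes are assumptions for this sketch, not the paper's exact procedure.

```python
import torch


def decimate_tokens(hidden_states: torch.Tensor,
                    delta: torch.Tensor,
                    keep: int) -> torch.Tensor:
    """Illustrative token decimation (assumed variant of the DeciMamba idea).

    hidden_states: (batch, seq_len, d_model) activations passed to the next layer.
    delta:         (batch, seq_len, d_inner) per-token discretization steps from the S6 layer.
    keep:          number of tokens to retain (e.g. roughly the training context length).
    """
    # Importance score per token: mean |delta| over channels (assumed scoring rule).
    scores = delta.abs().mean(dim=-1)                                 # (batch, seq_len)
    # Indices of the `keep` highest-scoring tokens, restored to sequence order.
    top_idx = scores.topk(keep, dim=-1).indices.sort(dim=-1).values   # (batch, keep)
    # Gather the retained tokens across the feature dimension.
    gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
    return torch.gather(hidden_states, dim=1, index=gather_idx)
```

Because the retained sequence is no longer than the lengths seen during training, the effective receptive field of subsequent layers stays within its trained regime, which is the intuition behind training-free extrapolation.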