Video understanding requires the extraction of rich spatio-temporal representations, which transformer models achieve through self-attention. Unfortunately, self-attention poses a heavy computational burden. In NLP, Mamba has emerged as an efficient alternative to transformers. However, Mamba's successes do not trivially extend to vision tasks, including those in video analysis. In this paper, we theoretically analyze the differences between self-attention and Mamba. We identify two limitations in Mamba's token processing: historical decay and element contradiction. We propose VideoMambaPro (VMP), which addresses these limitations by adding masked backward computation and elemental residual connections to a VideoMamba backbone. VideoMambaPro models of different sizes surpass VideoMamba by 1.6-2.8% and 1.1-1.9% top-1 accuracy on Kinetics-400 and Something-Something V2, respectively. Even without extensive pre-training, our models present an increasingly attractive and efficient alternative to current transformer models. Moreover, our two solutions are orthogonal to recent advances in Vision Mamba models, and are likely to provide further improvements in future models.