Recent sequence modeling approaches based on selective state space models, known as Mamba models, have attracted a surge of interest. These models process long sequences efficiently in linear time and are rapidly being adopted across a wide range of applications, such as language modeling, where they demonstrate promising performance. To foster their reliable use in real-world scenarios, it is crucial to improve their transparency. Our work bridges this critical gap by bringing explainability, in particular Layer-wise Relevance Propagation (LRP), to the Mamba architecture. Guided by the axiom of relevance conservation, we identify specific components in the Mamba architecture that cause unfaithful explanations. To remedy this issue, we propose MambaLRP, a novel algorithm within the LRP framework that ensures a more stable and reliable relevance propagation through these components. Our proposed method is theoretically sound and achieves state-of-the-art explanation performance across a diverse range of models and datasets. Moreover, MambaLRP facilitates a deeper inspection of Mamba architectures, uncovering various biases and evaluating their significance. It also enables an analysis of prior speculations regarding the long-range capabilities of Mamba models.
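For reference, the conservation axiom invoked above requires that the total relevance redistributed at every layer equals the model output being explained; a standard statement in the LRP literature (the notation, including the layer index $l$ and per-component relevance scores $R_i^{(l)}$, is illustrative and not taken from this abstract) is
\[
\sum_i R_i^{(1)} \;=\; \cdots \;=\; \sum_i R_i^{(l)} \;=\; \cdots \;=\; f(\mathbf{x}),
\]
where $R_i^{(l)}$ denotes the relevance assigned to component $i$ at layer $l$ and $f(\mathbf{x})$ is the output score of the model for input $\mathbf{x}$. Layers that break this equality during backward propagation are the kind of components the method above seeks to identify and repair.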