An ecosystem of Transformer-based models has been established by building large models trained on extensive data. Parameter-efficient fine-tuning (PEFT) is a crucial technology for deploying these models to downstream tasks at minimal cost while maintaining strong performance. Recently, Mamba, a State Space Model (SSM)-based model, has attracted attention as a potential alternative to Transformers. While many large-scale Mamba-based models have been proposed, efficiently adapting pre-trained Mamba-based models to downstream tasks remains unexplored. In this paper, we conduct an exploratory analysis of PEFT methods for Mamba. We investigate the effectiveness of existing PEFT methods for Transformers when applied to Mamba, and we modify these methods to better align with the Mamba architecture. Additionally, we propose new Mamba-specific PEFT methods that leverage the distinctive structure of Mamba. Our experiments indicate that PEFT performs more effectively for Mamba than for Transformers. Finally, we demonstrate how to effectively combine multiple PEFT methods and provide a framework that outperforms previous work. To ensure reproducibility, we will release the code after publication.
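To make the setting concrete, the sketch below shows one way an existing Transformer PEFT method (LoRA) can be attached to a linear projection inside a Mamba-style block. This is a minimal illustration under our own assumptions, not the paper's implementation; the layer name in_proj, the rank, and the dimensions are hypothetical choices for the example.

```python
# Minimal sketch: applying LoRA (a standard Transformer PEFT method) to a
# linear projection in a Mamba-style block. Hyperparameters and layer names
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: B is zero-initialized so training starts from
        # the pre-trained behavior.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Usage: wrap a projection from a (hypothetical) pre-trained Mamba block,
# so only the small LoRA factors receive gradients during fine-tuning.
d_model = 256
in_proj = nn.Linear(d_model, 2 * d_model)    # stands in for a Mamba in_proj
peft_layer = LoRALinear(in_proj, rank=8)
y = peft_layer(torch.randn(4, 10, d_model))  # (batch, seq_len, d_model)
```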