Multimodal foundation models are transformative in sequential recommender systems, leveraging powerful representation learning capabilities. While parameter-efficient fine-tuning (PEFT) is commonly used to adapt foundation models for recommendation tasks, most research prioritizes parameter efficiency, often overlooking critical factors such as GPU memory efficiency and training speed. Addressing this gap, our paper introduces IISAN (Intra- and Inter-modal Side Adapted Network for Multimodal Representation), a simple plug-and-play architecture that uses a decoupled PEFT structure and exploits both intra- and inter-modal adaptation. IISAN matches the performance of full fine-tuning (FFT) and state-of-the-art PEFT methods. More importantly, it significantly reduces GPU memory usage, from 47GB to just 3GB for multimodal sequential recommendation tasks, and cuts training time per epoch from 443s to 22s compared with FFT. This is also a notable improvement over Adapter and LoRA, which require 37-39GB of GPU memory and 350-380 seconds per epoch for training. Furthermore, we propose a new composite efficiency metric, TPME (Training-time, Parameter, and GPU Memory Efficiency), to dispel the prevalent misconception that "parameter efficiency represents overall efficiency". TPME provides more comprehensive insights for practical efficiency comparisons between different methods. We also provide an accessible efficiency analysis of all PEFT and FFT approaches, which demonstrates the superiority of IISAN. We release our code and other materials at https://github.com/GAIR-Lab/IISAN.
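The abstract does not give TPME's exact formula, but the idea of a composite efficiency metric can be sketched as follows. This is an illustrative assumption, not the paper's definition: each efficiency dimension (training time per epoch, trainable parameters, peak GPU memory) is normalized against the full fine-tuning baseline and combined as a weighted sum, so lower scores indicate better overall efficiency. The parameter counts below are hypothetical placeholders.

```python
def composite_efficiency(train_time_s, trainable_params, gpu_mem_gb,
                         baseline, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Illustrative composite efficiency score (lower is better).

    Each dimension is expressed as a ratio to a full fine-tuning (FFT)
    baseline, then combined as a weighted sum. NOTE: this is a sketch of
    the general idea of a composite metric, not the TPME formula itself.
    """
    ratios = (
        train_time_s / baseline["train_time_s"],
        trainable_params / baseline["trainable_params"],
        gpu_mem_gb / baseline["gpu_mem_gb"],
    )
    return sum(w * r for w, r in zip(weights, ratios))


# Baseline numbers for time and memory come from the abstract (443s/epoch,
# 47GB for FFT); the parameter counts are hypothetical.
fft = {"train_time_s": 443, "trainable_params": 121_000_000, "gpu_mem_gb": 47}

# IISAN-like setting: 22s/epoch and 3GB from the abstract, small adapter
# parameter count assumed for illustration.
iisan_score = composite_efficiency(22, 3_000_000, 3, baseline=fft)
fft_score = composite_efficiency(443, 121_000_000, 47, baseline=fft)
```

By construction, FFT scores approximately 1.0 against itself, while a method that is cheaper on every dimension scores well below 1.0, making cross-method comparisons a single-number affair.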