Sequential recommendation systems predict a user's next item of interest by analyzing past interactions, aligning recommendations with individual preferences. Leveraging the strengths of Large Language Models (LLMs) in knowledge comprehension and reasoning, recent approaches have applied LLMs to sequential recommendation through language generation paradigms. These methods convert user behavior sequences into prompts for LLM fine-tuning, utilizing Low-Rank Adaptation (LoRA) modules to refine recommendations. However, applying a single uniform LoRA module across diverse user behaviors often fails to capture individual variability, leading to suboptimal performance and negative transfer between disparate sequences. To address these challenges, we propose Instance-wise LoRA (iLoRA), which integrates LoRA with the Mixture of Experts (MoE) framework. iLoRA creates a diverse array of experts, each capturing specific aspects of user preferences, and introduces a sequence-representation-guided gate function. This gate function processes historical interaction sequences to generate enriched representations, guiding the gating network to output customized expert participation weights. This tailored approach mitigates negative transfer and dynamically adjusts to diverse behavior patterns. Extensive experiments on three benchmark datasets demonstrate the effectiveness of iLoRA, highlighting its superior performance compared to existing methods in capturing user-specific preferences and improving recommendation accuracy.
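The core mechanism described above — a gating network that maps each user's sequence representation to per-expert mixture weights over a set of LoRA experts — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the toy dimensions, the single linear gating layer `W_gate`, and the function name `ilora_forward` are assumptions for exposition.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, r, n_experts = 8, 2, 3  # hidden size, LoRA rank, expert count (toy sizes)

# Frozen base weight and per-expert low-rank factors B_i, A_i
W = rng.normal(size=(d, d))
A = [rng.normal(size=(r, d)) * 0.01 for _ in range(n_experts)]
B = [np.zeros((d, r)) for _ in range(n_experts)]  # B starts at zero (standard LoRA init)

# Gating network (hypothetical single linear layer): sequence representation -> expert logits
W_gate = rng.normal(size=(n_experts, d))

def ilora_forward(x, seq_repr):
    """Instance-wise LoRA: expert mixture weights depend on the user's own sequence."""
    g = softmax(W_gate @ seq_repr)  # customized expert participation weights, sum to 1
    delta = sum(g[i] * (B[i] @ A[i]) for i in range(n_experts))
    return (W + delta) @ x

x = rng.normal(size=d)          # token hidden state
seq_repr = rng.normal(size=d)   # enriched representation of the interaction sequence
y = ilora_forward(x, seq_repr)
```

Because the gate is conditioned on `seq_repr`, two users with different interaction histories receive different effective adapter weights from the same frozen backbone, which is what lets the mixture avoid the negative transfer of a single shared LoRA.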