Recent advances in Large Language Models (LLMs) have demonstrated promising performance in sequential recommendation tasks, leveraging their superior language understanding capabilities. However, existing LLM-based recommendation approaches predominantly focus on modeling item-level co-occurrence patterns while failing to adequately capture user-level personalized preferences. This is problematic because even users who display similar behavioral patterns (e.g., clicking or purchasing similar items) may have fundamentally different underlying interests. To alleviate this problem, in this paper, we propose ULMRec, a framework that effectively integrates user personalized preferences into LLMs for sequential recommendation. Considering the semantic gap between item IDs and LLMs, we replace item IDs in user historical behaviors with the corresponding item titles, enabling the model to capture item semantics. To integrate user personalized preferences, we design two key components: (1) user indexing: a personalized user indexing mechanism that applies vector quantization to user reviews and user IDs to generate meaningful, unique user representations; and (2) alignment tuning: an alignment-based tuning stage that employs comprehensive preference alignment tasks to strengthen the model's capability to capture personalized information. Through this design, ULMRec achieves a deep integration of language semantics with user personalized preferences, facilitating effective adaptation to recommendation tasks. Extensive experiments on two public datasets demonstrate that ULMRec significantly outperforms existing methods, validating the effectiveness of our approach.
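The user-indexing idea described above can be illustrated with a minimal sketch: a user's review embedding is quantized against learned codebooks so that each user maps to a short sequence of discrete codes usable as a unique index. The embedding dimension, codebook sizes, the multi-level residual scheme, and all names below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact method): residual vector
# quantization of a user's review embedding into a discrete code sequence.
# Dimensions and codebook sizes are arbitrary assumptions.

rng = np.random.default_rng(0)

DIM, LEVELS, CODEBOOK_SIZE = 16, 3, 8
# One randomly initialized codebook per quantization level; in practice
# these would be learned jointly with the recommendation objective.
codebooks = rng.normal(size=(LEVELS, CODEBOOK_SIZE, DIM))

def user_index(review_embedding: np.ndarray) -> list[int]:
    """At each level, pick the nearest codeword, then quantize the
    leftover residual at the next level. The resulting code list
    serves as a compact, discrete user identifier."""
    residual = review_embedding
    codes = []
    for level in range(LEVELS):
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        idx = int(np.argmin(dists))
        codes.append(idx)
        residual = residual - codebooks[level][idx]
    return codes

# Two users whose clicked items overlap can still receive different
# indices when their review semantics (embeddings) differ:
u1, u2 = rng.normal(size=DIM), rng.normal(size=DIM)
print(user_index(u1), user_index(u2))
```

Under this sketch, the discrete codes could be added to the LLM's vocabulary as special user tokens, giving the model a handle on who is interacting rather than only what was interacted with.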