Sequential recommendation aims to predict a user's next item interaction from their past engagement sequence. Recently, the advent of Large Language Models (LLMs) has sparked interest in applying them to sequential recommendation by framing it as a language modeling task. Previous studies represent items within LLMs' input prompts as either ID indices or textual metadata. However, these approaches often fail to either encapsulate comprehensive world knowledge or exhibit sufficient understanding of user behavior. To combine the complementary strengths of conventional recommenders in capturing users' behavioral patterns and of LLMs in encoding world knowledge about items, we introduce the Large Language-Recommendation Assistant (LLaRA). Specifically, it uses a novel hybrid prompting method that integrates ID-based item embeddings learned by traditional recommendation models with textual item features. Treating the "sequential behaviors of users" as a distinct modality beyond text, we employ a projector to align the traditional recommender's ID embeddings with the LLM's input space. Moreover, rather than directly exposing the hybrid prompt to the LLM, we adopt a curriculum learning strategy that gradually ramps up training complexity. Initially, we warm up the LLM using text-only prompts, which better suit its inherent language modeling ability. Subsequently, we progressively transition to the hybrid prompts, training the model to seamlessly incorporate the behavioral knowledge from the traditional sequential recommender into the LLM. Empirical results validate the effectiveness of our proposed framework. Codes are available at https://github.com/ljy0ustc/LLaRA.
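The core mechanism above can be sketched as a small projector network that maps item ID embeddings from a conventional recommender into the LLM's token-embedding space, so projected "behavior tokens" can be spliced into a prompt alongside ordinary text-token embeddings. This is a minimal, hedged sketch, not the paper's exact implementation: the MLP shape, the embedding dimensions (64 for the recommender, 4096 for the LLM), and the names `ItemProjector`, `rec_dim`, and `llm_dim` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ItemProjector(nn.Module):
    """Maps ID embeddings from a traditional sequential recommender
    (e.g., a SASRec-style model) into the LLM's token-embedding space
    so they can be interleaved with text tokens in a hybrid prompt.
    The two-layer MLP shape is an assumption for illustration."""

    def __init__(self, rec_dim: int, llm_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rec_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, llm_dim),
        )

    def forward(self, id_emb: torch.Tensor) -> torch.Tensor:
        return self.net(id_emb)

# Illustrative dimensions (assumptions, not the paper's exact config).
rec_dim, llm_dim = 64, 4096
projector = ItemProjector(rec_dim, llm_dim)

# A user's history of three items, as recommender ID embeddings.
history = torch.randn(3, rec_dim)
behavior_tokens = projector(history)          # shape: (3, llm_dim)

# Hybrid prompt: interleave projected behavior tokens with text-token
# embeddings (random stand-ins here) before feeding the LLM.
text_tokens = torch.randn(10, llm_dim)
hybrid_prompt = torch.cat([text_tokens, behavior_tokens], dim=0)
```

In practice the `hybrid_prompt` tensor would be passed to the LLM via an embeddings-level input (e.g., the `inputs_embeds` argument in Hugging Face transformers), and the curriculum described above would control how often behavior tokens replace their text-only counterparts during training.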