Recent advances in Large Language Models (LLMs) are reshaping the paradigm of Recommender Systems (RS). However, when items in a recommendation scenario carry rich textual information, such as product descriptions in online shopping or news headlines on social media, longer texts are needed to comprehensively describe a user's historical behavior sequence. This poses significant challenges to LLM-based recommenders, including input-length limits, heavy time and memory overheads, and suboptimal model performance. To address these issues, in this paper we design a novel framework for harnessing Large Language Models for Text-Rich Sequential Recommendation (LLM-TRSR). Specifically, we first segment the user's historical behaviors into blocks and then employ an LLM-based summarizer to condense these behavior blocks. In particular, drawing inspiration from the successful application of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) in user modeling, we introduce two summarization paradigms: hierarchical summarization, which summarizes blocks independently and then merges the partial summaries, and recurrent summarization, which carries a single summary forward and updates it block by block. We then construct a prompt containing the user preference summary, the user's recent interactions, and candidate item information, and feed it into an LLM-based recommender, which is fine-tuned with Supervised Fine-Tuning (SFT) to yield our final recommendation model; we also use Low-Rank Adaptation (LoRA) for Parameter-Efficient Fine-Tuning (PEFT). Experiments on two public datasets clearly demonstrate the effectiveness of our approach.
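To make the pipeline concrete, the following minimal Python sketch (not from the paper) illustrates how the two summarization paradigms and the final recommendation prompt might be wired together. The `llm` callable stands in for any text-in/text-out LLM; the prompt wording, `block_size`, and Yes/No answer format are illustrative assumptions rather than the paper's exact templates.

```python
from typing import Callable, List

def segment(history: List[str], block_size: int) -> List[List[str]]:
    """Split the user's historical behaviors into fixed-size blocks."""
    return [history[i:i + block_size] for i in range(0, len(history), block_size)]

def recurrent_summarize(llm: Callable[[str], str], blocks: List[List[str]]) -> str:
    """RNN-style: carry one summary forward, updating it with each new block."""
    summary = ""
    for block in blocks:
        summary = llm(
            f"Current user preference summary:\n{summary}\n\n"
            "New user behaviors:\n" + "\n".join(block) + "\n\n"
            "Update the preference summary to reflect the new behaviors."
        )
    return summary

def hierarchical_summarize(llm: Callable[[str], str], blocks: List[List[str]]) -> str:
    """CNN-style: summarize each block independently, then merge the summaries."""
    block_summaries = [
        llm("Summarize the user's preferences from these behaviors:\n" + "\n".join(block))
        for block in blocks
    ]
    return llm("Merge these partial preference summaries into one:\n" + "\n".join(block_summaries))

def build_recommendation_prompt(summary: str, recent: List[str], candidate: str) -> str:
    """Assemble the prompt fed to the (LoRA-fine-tuned) LLM recommender."""
    return (
        f"User preference summary:\n{summary}\n\n"
        "Recent interactions:\n" + "\n".join(recent) + "\n\n"
        f"Candidate item:\n{candidate}\n\n"
        "Will the user interact with the candidate item? Answer Yes or No."
    )
```

Under this sketch, either summarizer compresses an arbitrarily long text-rich history into a short summary, so the final prompt stays within the recommender's context window regardless of how many historical interactions the user has.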