With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP), LLM-enhanced recommender systems have received much attention and are being actively explored. In this paper, we focus on adapting and empowering a pure large language model for zero-shot and few-shot recommendation tasks. First and foremost, we identify and formulate the lifelong sequential behavior incomprehension problem for LLMs in recommendation domains: LLMs fail to extract useful information from the textual context of a long user behavior sequence, even when the context length is far below the LLM's context limit. To address this issue and improve the recommendation performance of LLMs, we propose a novel framework, Retrieval-enhanced Large Language models (ReLLa), for recommendation tasks in both zero-shot and few-shot settings. For zero-shot recommendation, we perform semantic user behavior retrieval (SUBR) to improve the data quality of test samples, which greatly reduces the difficulty for LLMs to extract essential knowledge from user behavior sequences. For few-shot recommendation, we further design retrieval-enhanced instruction tuning (ReiT), which adopts SUBR as a data augmentation technique for training samples. Specifically, we construct a mixed training dataset consisting of both the original data samples and their retrieval-enhanced counterparts. We conduct extensive experiments on three real-world public datasets to demonstrate the superiority of ReLLa over existing baseline models, as well as its capability for lifelong sequential behavior comprehension. Notably, with fewer than 10% of the training samples, few-shot ReLLa outperforms traditional CTR models trained on the entire training set (e.g., DCNv2, DIN, SIM). The code is available at \url{https://github.com/LaVieEnRose365/ReLLa}.
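The core retrieval step behind SUBR can be illustrated with a minimal sketch: given an embedding of the target item and embeddings of the user's historical behaviors, select the top-k behaviors ranked by semantic (cosine) similarity rather than by recency. This is an illustrative assumption of the general technique, not the paper's exact implementation; the function name and the use of plain NumPy embeddings are hypothetical.

```python
import numpy as np

def semantic_user_behavior_retrieval(target_emb, behavior_embs, k):
    """Return indices of the k user behaviors most semantically similar
    to the target item, ranked by cosine similarity (highest first).

    target_emb:    (d,)   embedding of the candidate/target item
    behavior_embs: (n, d) embeddings of the user's historical behaviors
    """
    # L2-normalize so the dot product equals cosine similarity.
    target = target_emb / np.linalg.norm(target_emb)
    behaviors = behavior_embs / np.linalg.norm(
        behavior_embs, axis=1, keepdims=True
    )
    sims = behaviors @ target          # cosine similarity per behavior
    return np.argsort(-sims)[:k]       # indices of the k most similar

# Toy usage: behavior 0 points the same way as the target, behavior 3
# nearly so; SUBR keeps those two instead of the most recent ones.
target = np.array([1.0, 0.0])
history = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.9, 0.1]])
print(semantic_user_behavior_retrieval(target, history, k=2))
```

The retrieved behaviors would then replace the truncated, recency-based history in the textual prompt, so the LLM sees the behaviors most relevant to the target item within the same context budget.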