Sequential recommendation (SR) aims to predict a user's next interaction by modeling their behavior sequence and capturing the connection between past interactions and evolving preferences. Conventional SR models often focus solely on sequential patterns within the training data, neglecting the broader context and semantic information embedded in item titles from external sources, which limits their predictive power and adaptability. Large language models (LLMs) have recently shown promise in SR tasks thanks to their advanced understanding and strong generalization abilities. Researchers have attempted to enhance LLM-based recommendation by incorporating information from conventional SR models. However, previous approaches suffer from: 1) limited textual information, leading to poor recommendation performance; 2) incomplete understanding and utilization of conventional SR model information by the LLMs; and 3) excessive complexity and low interpretability of LLM-based methods. To improve LLM-based SR, we propose a novel framework, Distilling Sequential Pattern to Enhance LLMs-based Sequential Recommendation (DELRec), which extracts knowledge from conventional SR models and enables LLMs to easily comprehend and exploit that knowledge for more effective recommendation. DELRec consists of two main stages: 1) Distill Pattern from Conventional SR Models, which extracts the behavioral patterns exhibited by conventional SR models via soft prompts through two well-designed strategies; and 2) LLMs-based Sequential Recommendation, which fine-tunes LLMs to effectively use the distilled auxiliary information when performing SR tasks. Extensive experiments on four real-world datasets validate the effectiveness of the DELRec framework.