Conversational recommender systems (CRSs) aim to capture user preferences and provide personalized recommendations through multi-round natural language dialogues. However, most existing CRS models focus mainly on dialogue comprehension and preference mining within the current dialogue session, overlooking user preferences expressed in historical dialogue sessions. The preferences embedded in a user's historical sessions and the current session exhibit continuity and sequentiality, and we refer to CRSs with this characteristic as sequential CRSs. In this work, we leverage memory-enhanced LLMs to model this preference continuity, focusing on two key issues: (1) redundancy and noise in historical dialogue sessions, and (2) the cold-start user problem. To this end, we propose a Memory-enhanced Conversational Recommender System Framework with Large Language Models (dubbed MemoCRS), consisting of user-specific memory and general memory. User-specific memory is tailored to each user's personalized interests and implemented as an entity-based memory bank that refines preferences and retrieves relevant memories, thereby reducing the redundancy and noise of historical sessions. General memory, encapsulating collaborative knowledge and reasoning guidelines, provides shared knowledge for all users, especially cold-start users. With these two kinds of memory, LLMs are empowered to deliver more precise and tailored recommendations for each user. Extensive experiments on both Chinese and English datasets demonstrate the effectiveness of MemoCRS.
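The entity-based memory bank described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the class name `EntityMemoryBank`, its methods, and the overlap-count retrieval heuristic are all assumptions made for illustration. The idea is that preferences from past sessions are indexed by the entities they mention, so only entries sharing entities with the current session are retrieved, filtering out redundant or noisy history.

```python
from collections import defaultdict


class EntityMemoryBank:
    """Hypothetical sketch of a user-specific, entity-based memory bank.

    Each stored entry is a refined preference snippet keyed by the
    entities (e.g., movies, genres) it mentions; retrieval returns only
    entries whose entities overlap the current session's entities.
    """

    def __init__(self):
        # entity -> list of preference snippets mentioning that entity
        self._bank = defaultdict(list)

    def write(self, entities, preference):
        """Store a refined preference under every entity it mentions."""
        for entity in entities:
            self._bank[entity].append(preference)

    def retrieve(self, session_entities, top_k=3):
        """Return up to top_k stored preferences relevant to the current
        session, ranked by how many session entities they share."""
        scored = {}
        for entity in session_entities:
            for pref in self._bank.get(entity, []):
                scored[pref] = scored.get(pref, 0) + 1
        ranked = sorted(scored, key=scored.get, reverse=True)
        return ranked[:top_k]


# Usage: only sci-fi-related memory is surfaced for a sci-fi session.
bank = EntityMemoryBank()
bank.write(["sci-fi", "Interstellar"], "likes cerebral sci-fi films")
bank.write(["comedy"], "enjoys light comedies")
print(bank.retrieve(["sci-fi"]))  # ['likes cerebral sci-fi films']
```

Retrieved snippets like these would then be placed in the LLM's prompt context alongside the current session, which is how memory-augmented LLM systems typically consume such a bank.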