Recently, researchers have investigated the capabilities of Large Language Models (LLMs) for generative recommender systems. Existing LLM-based recommender models are trained by inserting user and item IDs into a discrete prompt template. However, the disconnect between these IDs and natural language makes it difficult for the LLM to learn the relationships between users. To address this issue, we propose a PErsonAlized PrOmpt Distillation (PeaPOD) approach that distills user preferences into personalized soft prompts. Considering the complexity of user preferences in the real world, we maintain a shared set of learnable prompts that are dynamically weighted according to each user's interests, constructing the user-personalized prompt in a compositional manner. Experimental results on three real-world datasets demonstrate the effectiveness of our PeaPOD model on sequential recommendation, top-N recommendation, and explanation generation tasks.
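The compositional construction described above can be illustrated with a minimal sketch: a shared bank of learnable prompt vectors is scored against a user's interest embedding, and the resulting softmax weights combine the bank into one personalized soft prompt. All names, shapes, and the dot-product scoring function are illustrative assumptions, not PeaPOD's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 8, 16                            # number of shared prompts, embedding dim
prompt_bank = rng.normal(size=(K, d))   # shared set of learnable prompt vectors
user_interest = rng.normal(size=(d,))   # one user's interest embedding (assumed given)

def softmax(x):
    e = np.exp(x - x.max())             # subtract max for numerical stability
    return e / e.sum()

# Score each shared prompt against the user's interests, then combine the
# bank with the resulting weights (attention-style pooling).
scores = prompt_bank @ user_interest           # (K,)
weights = softmax(scores)                      # (K,), non-negative, sums to 1
personalized_prompt = weights @ prompt_bank    # (d,) soft prompt for this user

print(personalized_prompt.shape)               # (16,)
```

In training, the prompt bank (and any scoring parameters) would be updated by gradient descent alongside the LLM's recommendation objective, so that frequently co-activated prompts come to encode shared facets of user preference.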