Large language model (LLM)-based recommender models, which bridge users and items through textual prompts for effective semantic reasoning, have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, which limits the reasoning capability of LLMs for recommendation. This paper proposes the rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales distilled from reviews related to users and items, RDRec effectively specifies user and item profiles for recommendation. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendation. Our source code is released at https://github.com/WangXFng/RDRec.
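The rationale-distillation idea above can be sketched in two stages: a larger LM extracts the rationale (user preference and item attribute) behind each interaction from its review, and the resulting (prompt, rationale) pairs become training data for the compact recommender. This is a minimal illustrative sketch, not the authors' implementation; the function names and the stub in place of a real LLM call are assumptions.

```python
def large_lm_generate_rationale(review: str) -> str:
    # Hypothetical stub for the larger LM. A real system would prompt an
    # LLM to distill the user preference / item attribute from the review.
    return f"preference/attribute distilled from: {review}"

def build_distillation_pairs(interactions):
    # Turn (user, item, review) triples into (prompt, target) pairs used
    # to train the compact model to reproduce the distilled rationales.
    pairs = []
    for user, item, review in interactions:
        prompt = f"Explain why user {user} interacted with item {item}."
        target = large_lm_generate_rationale(review)
        pairs.append((prompt, target))
    return pairs

interactions = [
    ("u1", "i9", "Loved the battery life and light weight."),
    ("u2", "i3", "Great plot, but the ending felt rushed."),
]
pairs = build_distillation_pairs(interactions)
for prompt, target in pairs:
    print(prompt, "->", target)
```

In a real pipeline, the compact model would then be fine-tuned on these pairs so that its learned user/item profiles encode the distilled rationales.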