Modern recommender systems aim to deeply understand users' complex preferences through their past interactions. While deep collaborative filtering approaches using Graph Neural Networks (GNNs) excel at capturing user-item relationships, their effectiveness is limited when handling sparse data or zero-shot scenarios, primarily due to constraints in ID-based embedding functions. To address these challenges, we propose a model-agnostic recommendation instruction-tuning paradigm that seamlessly integrates large language models with collaborative filtering. Our proposed $\underline{Rec}$ommendation $\underline{L}$anguage $\underline{M}$odel (RecLM) enhances the capture of user preference diversity through a carefully designed reinforcement learning reward function that facilitates self-augmentation of language models. Comprehensive evaluations demonstrate significant advantages of our approach across various settings, and its plug-and-play compatibility with state-of-the-art recommender systems results in notable performance enhancements. The implementation of our RecLM framework is publicly available at: https://github.com/HKUDS/RecLM.