Recent advances in Large Language Models (LLMs) have attracted considerable interest in leveraging these models to enhance Recommender Systems (RSs). Existing work predominantly uses LLMs to generate knowledge-rich texts or employs LLM-derived embeddings as features to improve RSs. Although the extensive world knowledge embedded in LLMs generally benefits RSs, such applications can take only a limited number of users and items as input and fail to adequately exploit collaborative filtering information. Given its crucial role in RSs, one key challenge in enhancing RSs with LLMs lies in providing better collaborative filtering information through LLMs. In this paper, drawing inspiration from in-context learning and chain-of-thought reasoning in LLMs, we propose the Large Language Models enhanced Collaborative Filtering (LLM-CF) framework, which distills the world knowledge and reasoning capabilities of LLMs into collaborative filtering. We also explore a concise and efficient instruction-tuning method that improves the recommendation capabilities of LLMs while preserving their general functionalities (e.g., no performance degradation on LLM benchmarks). Comprehensive experiments on three real-world datasets demonstrate that LLM-CF significantly enhances several backbone recommendation models and consistently outperforms competitive baselines, showcasing its effectiveness in distilling the world knowledge and reasoning capabilities of LLMs into collaborative filtering.