Recent advancements in Large Language Models (LLMs) have attracted considerable interest among researchers seeking to leverage these models to enhance Recommender Systems (RSs). Existing work predominantly uses LLMs to generate knowledge-rich texts or uses LLM-derived embeddings as features to improve RSs. Although the extensive world knowledge embedded in LLMs generally benefits RSs, such applications can only take a limited number of users and items as inputs and therefore fail to adequately exploit collaborative filtering information. Given its crucial role in RSs, one key challenge in enhancing RSs with LLMs lies in providing better collaborative filtering information through LLMs. In this paper, drawing inspiration from in-context learning and chain-of-thought reasoning in LLMs, we propose the Large Language Models enhanced Collaborative Filtering (LLM-CF) framework, which distills the world knowledge and reasoning capabilities of LLMs into collaborative filtering. We also explore a concise and efficient instruction-tuning method that improves the recommendation capabilities of LLMs while preserving their general functionalities (e.g., without degrading performance on general LLM benchmarks). Comprehensive experiments on three real-world datasets demonstrate that LLM-CF significantly enhances several backbone recommendation models and consistently outperforms competitive baselines, showcasing its effectiveness in distilling the world knowledge and reasoning capabilities of LLMs into collaborative filtering.