Contemporary recommender systems predominantly rely on collaborative filtering techniques, employing ID embeddings to capture latent associations among users and items. However, this approach overlooks the wealth of semantic information embedded within textual descriptions of items, leading to suboptimal performance in cold-start scenarios and long-tail user recommendations. Leveraging the capabilities of Large Language Models (LLMs) pretrained on massive text corpora presents a promising avenue for enhancing recommender systems by integrating open-world domain knowledge. In this paper, we propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge. We address computational complexity concerns by utilizing pretrained LLMs as item encoders and freezing their parameters to avoid catastrophic forgetting and preserve open-world knowledge. To bridge the gap between the open-world and collaborative domains, we design a twin-tower structure supervised by the recommendation task and tailored for practical industrial application. Through offline experiments on a large-scale industrial dataset and online A/B tests, we demonstrate the efficacy of our approach.
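The twin-tower idea in the abstract can be sketched as follows: a frozen item encoder produces fixed text embeddings, and two small trainable towers project the user side (aggregated history embeddings) and the item side into a shared collaborative space, where a similarity score is supervised by the recommendation objective. This is a minimal toy sketch, not the paper's implementation; all names (`llm_encode`, `Tower`, the hash-based embedding, and the toy dimensions) are illustrative assumptions.

```python
import math
import random

DIM_LLM, DIM_CF = 8, 4  # toy sizes: frozen-encoder dim -> collaborative dim
random.seed(0)


def llm_encode(text: str) -> list[float]:
    """Stand-in for a FROZEN pretrained LLM item encoder: maps item text to a
    fixed vector. Here it's a deterministic hash-seeded toy embedding; its
    'parameters' are never updated, mirroring the frozen-LLM design."""
    rng = random.Random(sum(text.encode("utf-8")))
    return [rng.uniform(-1.0, 1.0) for _ in range(DIM_LLM)]


class Tower:
    """One trainable linear projection tower (user side or item side)."""

    def __init__(self) -> None:
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(DIM_LLM)]
                  for _ in range(DIM_CF)]

    def __call__(self, x: list[float]) -> list[float]:
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]


def cosine(a: list[float], b: list[float]) -> float:
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0


user_tower, item_tower = Tower(), Tower()

# User side: mean-pool the frozen-encoder embeddings of the user's history
# items, then project; item side: project the candidate item's embedding.
history = ["red running shoes", "trail running socks"]
hist_vecs = [llm_encode(t) for t in history]
user_in = [sum(col) / len(col) for col in zip(*hist_vecs)]

user_vec = user_tower(user_in)
item_vec = item_tower(llm_encode("lightweight running jacket"))
score = cosine(user_vec, item_vec)  # recommendation score in [-1, 1]
print(round(score, 4))
```

In training, only the two towers would receive gradients (e.g. from a contrastive loss over user-item interaction pairs), while `llm_encode` stays fixed, which is what keeps the open-world knowledge intact and the compute cost bounded.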