Contemporary recommendation systems predominantly rely on ID embeddings to capture latent associations between users and items. However, this approach overlooks the wealth of semantic information embedded in the textual descriptions of items, leading to suboptimal performance and poor generalization. Leveraging the capability of large language models (LLMs) to comprehend and reason about textual content offers a promising avenue for advancing recommendation systems. To this end, we propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge. We address computational-complexity concerns by using pretrained LLMs as item encoders and freezing their parameters, which avoids catastrophic forgetting and preserves open-world knowledge. To bridge the gap between the open-world and collaborative domains, we design a twin-tower structure that is supervised by the recommendation task and tailored to practical industrial deployment. Through experiments on a real-world large-scale industrial dataset and online A/B tests, we demonstrate the efficacy of our approach in industrial applications. We also achieve state-of-the-art performance on six Amazon Review datasets, verifying the superiority of our method.
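The twin-tower design described above can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, pooling strategy, and projection layers below are illustrative assumptions, and the frozen LLM item embeddings are simulated with a fixed random matrix standing in for pretrained text encodings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): the LLM embedding size
# and the shared recommendation-space size.
LLM_DIM, REC_DIM, N_ITEMS = 64, 16, 100

# Frozen item encoder output: pretrained-LLM text embeddings, simulated
# here by a fixed matrix. Freezing means these vectors never receive
# gradients, preserving the open-world knowledge of the LLM.
llm_item_emb = rng.normal(size=(N_ITEMS, LLM_DIM))

# Trainable projections bridging the open-world (LLM) space and the
# collaborative recommendation space -- one per tower. Only these
# (illustrative) parameters would be updated by the recommendation loss.
W_item = rng.normal(scale=0.1, size=(LLM_DIM, REC_DIM))
W_user = rng.normal(scale=0.1, size=(LLM_DIM, REC_DIM))

def l2_normalize(x, axis=-1):
    """Unit-normalize so the dot product below is cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def user_tower(history_item_ids):
    """User representation: mean-pool the frozen LLM embeddings of the
    user's interaction history, then project into the shared space
    (mean pooling is an assumption for this sketch)."""
    pooled = llm_item_emb[history_item_ids].mean(axis=0)
    return l2_normalize(pooled @ W_user)

def item_tower(item_ids):
    """Item representation: project frozen LLM embeddings."""
    return l2_normalize(llm_item_emb[item_ids] @ W_item)

# Retrieval scoring: cosine similarity between the two towers.
user_vec = user_tower([3, 17, 42])
scores = item_tower(np.arange(N_ITEMS)) @ user_vec
top5 = np.argsort(-scores)[:5]
print(top5)
```

In training, the two projections would be optimized with a contrastive or next-item objective on user-item interaction pairs, while the LLM item encoder stays frozen throughout.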