Benefiting from their strong reasoning capabilities, large language models (LLMs) have demonstrated remarkable performance in recommender systems. Various efforts have been made to distill knowledge from LLMs into collaborative models, employing techniques such as contrastive learning for representation alignment. In this work, we prove from an information-theoretic perspective that directly aligning the representations of LLMs and collaborative models is sub-optimal for downstream recommendation performance. Consequently, effectively aligning the semantic representations of collaborative models and LLMs remains an open problem. Motivated by this insight, we propose a novel plug-and-play alignment framework for LLMs and collaborative models. Specifically, we first disentangle the latent representations of both LLMs and collaborative models into specific and shared components via projection layers and representation regularization. We then perform both global and local structure alignment on the shared representations to facilitate knowledge transfer. Furthermore, we theoretically prove that the resulting specific and shared representations contain more relevant and less irrelevant information, which enhances downstream recommendation performance. Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state-of-the-art algorithms.
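To make the disentangle-then-align idea concrete, the following is a minimal NumPy sketch. The projection matrices, the orthogonality regularizer, and the specific alignment losses are all illustrative assumptions, not the paper's actual architecture: it sketches splitting each side's embeddings into shared and specific components via linear projections, penalizing overlap between the two components, and aligning the shared parts both locally (paired instances) and globally (pairwise similarity structure).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained representations for the same 5 items:
# one set from an LLM encoder, one from a collaborative model.
d_llm, d_cf, d_shared = 8, 4, 3
llm_emb = rng.normal(size=(5, d_llm))
cf_emb = rng.normal(size=(5, d_cf))

# Projection layers (here plain linear maps) disentangle each side's
# representation into a shared component and a model-specific component.
W_llm_sh = rng.normal(size=(d_llm, d_shared))
W_llm_sp = rng.normal(size=(d_llm, d_shared))
W_cf_sh = rng.normal(size=(d_cf, d_shared))
W_cf_sp = rng.normal(size=(d_cf, d_shared))

llm_sh, llm_sp = llm_emb @ W_llm_sh, llm_emb @ W_llm_sp
cf_sh, cf_sp = cf_emb @ W_cf_sh, cf_emb @ W_cf_sp

def orthogonality_reg(shared, specific):
    # Representation regularization (illustrative): push the shared and
    # specific components to carry non-overlapping information.
    return np.sum((shared.T @ specific) ** 2)

def local_alignment_loss(a, b):
    # Local structure alignment: paired shared representations of the
    # same item should be close.
    return np.mean(np.sum((a - b) ** 2, axis=1))

def global_structure_loss(a, b):
    # Global structure alignment: the pairwise similarity structure over
    # the batch should match across the two models.
    return np.mean((a @ a.T - b @ b.T) ** 2)

loss = (
    local_alignment_loss(llm_sh, cf_sh)
    + global_structure_loss(llm_sh, cf_sh)
    + 0.1 * (orthogonality_reg(llm_sh, llm_sp) + orthogonality_reg(cf_sh, cf_sp))
)
```

In a real training loop the projection weights would be learned by minimizing this combined loss (plus the downstream recommendation objective), rather than drawn at random as above.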