Recent studies empirically indicate that language models (LMs) encode rich world knowledge beyond mere semantics, attracting significant attention across various fields. However, in the recommendation domain, it remains uncertain whether LMs implicitly encode user preference information. Contrary to the prevailing understanding that LMs and traditional recommenders learn two distinct representation spaces, owing to the huge gap between language and behavior modeling objectives, this work re-examines that understanding and explores extracting a recommendation space directly from the language representation space. Surprisingly, our findings demonstrate that item representations linearly mapped from advanced LM representations yield superior recommendation performance. This outcome suggests a possible homomorphism between the advanced language representation space and an effective item representation space for recommendation, implying that collaborative signals may be implicitly encoded within LMs. Motivated by these findings, we explore the possibility of designing advanced collaborative filtering (CF) models purely based on language representations, without ID-based embeddings. Specifically, we incorporate several crucial components to build a simple yet effective model that takes item titles as input. Empirical results show that such a simple model can outperform leading ID-based CF models, which sheds light on using language representations for better recommendation. Moreover, we systematically analyze this simple model and identify several key benefits of using advanced language representations: a good initialization for item representations, zero-shot recommendation abilities, and awareness of user intention. Our findings highlight the connection between language modeling and behavior modeling, which can inspire both the natural language processing and recommender system communities.
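The core idea of linearly mapping LM representations of item titles into a recommendation space can be sketched as follows. This is a minimal illustration, not the paper's actual model: the LM embeddings are simulated with random vectors (in practice they would come from a frozen language model encoder), the linear map `W` is random rather than trained with a CF objective, and the user representation is a simple mean of interacted-item vectors, all of which are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for precomputed LM embeddings of item titles (n_items x d_lm).
# In a real system these would be produced by a frozen LM encoder.
n_items, d_lm, d_rec = 100, 64, 16
lm_item_emb = rng.normal(size=(n_items, d_lm))

# Linear map from the LM space to a (lower-dimensional) recommendation
# space. In practice W would be learned with a standard CF loss
# (e.g., BPR); here it is random purely for illustration.
W = rng.normal(size=(d_lm, d_rec)) / np.sqrt(d_lm)
item_emb = lm_item_emb @ W  # linearly mapped item representations

def user_representation(history, item_emb):
    """User vector as the mean of the interacted items' representations."""
    return item_emb[history].mean(axis=0)

def recommend(history, item_emb, k=5):
    """Score all items by dot product with the user vector; return top-k
    items the user has not interacted with yet."""
    u = user_representation(history, item_emb)
    scores = item_emb @ u
    scores[np.asarray(history)] = -np.inf  # mask already-seen items
    return np.argsort(-scores)[:k]

top_items = recommend([3, 7, 42], item_emb)
```

Because the item side is derived entirely from title embeddings, no ID-based item embedding table is needed, which is the property the abstract's CF model exploits.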