The rapid development of Large Language Models (LLMs) creates new opportunities for recommender systems, particularly through the side information (e.g., item descriptions and analyses) that these models generate. However, aligning this side information with the collaborative information in historical interactions poses significant challenges. Biases inherent in LLMs can skew recommendations, producing distorted and potentially unfair user experiences. Moreover, propensity bias causes the alignment to represent all inputs in a low-dimensional subspace, a phenomenon known as dimensional collapse, which severely restricts the recommender system's ability to capture user preferences and behaviours. To address these issues, we introduce a novel framework named Counterfactual LLM Recommendation (CLLMR). Specifically, we propose a spectrum-based side information encoder that implicitly embeds structural information from historical interactions into the side information representation, thereby circumventing the risk of dimensional collapse. Furthermore, CLLMR models the causal relationships inherent in LLM-based recommender systems and, through counterfactual inference, counteracts the biases that LLMs introduce. Extensive experiments demonstrate that CLLMR consistently enhances the performance of various recommender models.
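Dimensional collapse can be diagnosed from the singular-value spectrum of an embedding matrix: collapsed representations concentrate their variance in a few directions, so their entropy-based effective rank is far below the ambient dimension. A minimal sketch (the synthetic embeddings and the `effective_rank` helper are hypothetical illustrations, not part of CLLMR):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "collapsed" embeddings: 64-d vectors that lie almost
# entirely in a 4-d subspace, plus a small amount of noise.
collapsed = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 64))
collapsed += 1e-3 * rng.normal(size=(1000, 64))

# Healthy embeddings spreading variance across all 64 dimensions.
healthy = rng.normal(size=(1000, 64))

def effective_rank(X):
    """Entropy-based effective rank of the centred embedding matrix."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    p = s / s.sum()  # normalise singular values into a distribution
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

print(effective_rank(collapsed))  # small: variance sits in ~4 directions
print(effective_rank(healthy))    # close to the ambient dimension, 64
```

A spectrum-based encoder of the kind the abstract describes aims to keep this effective rank high while aligning side information with collaborative signals.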