Recent advances in Foundation Models such as Large Language Models (LLMs) have propelled them to the forefront of Recommender Systems (RS). Despite their utility, there is a growing concern that LLMs might inadvertently perpetuate societal stereotypes, resulting in unfair recommendations. Fairness is critical for RS since many users rely on recommendations for decision-making and demand fulfillment, so this paper focuses on user-side fairness for LLM-based recommendation, where users may require a recommender system to be fair with respect to specific sensitive features such as gender or age. We examine the extent of unfairness exhibited by LLM-based recommender models built on both T5 and LLaMA backbones, and discuss appropriate methods for promoting the equitable treatment of users in LLM-based recommendation models. We introduce a novel Counterfactually-Fair-Prompt (CFP) method towards Unbiased Foundation mOdels (UFO) for fairness-aware LLM-based recommendation. Experiments are conducted on two real-world datasets, MovieLens-1M and Insurance, comparing CFP against both matching-based and sequential fairness-aware recommendation baselines. Results show that CFP achieves better recommendation performance than these baselines while maintaining a high level of fairness. Data and code are open-sourced at https://github.com/agiresearch/UP5.
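To make the counterfactual-fairness notion above concrete, the following is a minimal sketch of the kind of probe it implies: query a recommender with and without a sensitive attribute in the prompt and measure how much the top-k list changes. This is an illustration only, not the paper's CFP method; `llm_recommend` is a hypothetical stub standing in for any LLM-backed recommender (e.g., a T5 or LLaMA model behind a prompt template).

```python
# Hedged sketch of a counterfactual-fairness probe for an LLM recommender.
# Assumption: `llm_recommend` is a hypothetical stand-in for a real model call.

def llm_recommend(prompt: str, k: int = 5) -> list[str]:
    """Hypothetical recommender stub; replace with a real LLM call."""
    # Toy fixed catalog so the sketch runs end-to-end.
    catalog = ["Toy Story", "Heat", "Alien", "Casablanca", "Up"]
    return catalog[:k]

def counterfactual_overlap(history: str, attribute: str, k: int = 5) -> float:
    """Jaccard overlap between top-k lists generated with and without a
    sensitive attribute in the prompt; 1.0 means the attribute had no
    visible effect on the recommendations."""
    base = set(llm_recommend(
        f"User history: {history}. Recommend {k} movies.", k))
    counterfactual = set(llm_recommend(
        f"User is {attribute}. User history: {history}. Recommend {k} movies.", k))
    return len(base & counterfactual) / len(base | counterfactual)

if __name__ == "__main__":
    score = counterfactual_overlap("Toy Story, Up", attribute="female", k=5)
    print(f"counterfactual top-5 overlap: {score:.2f}")
```

A low overlap score would indicate that the model's recommendations are sensitive to the stated attribute, which is the kind of user-side unfairness the paper measures and mitigates.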