Large Language Models (LLMs) have revolutionized Recommender Systems (RS) through advanced generative user modeling. However, LLM-based RS (LLM-RS) often inadvertently perpetuate biases present in the training data, leading to severe fairness issues. Addressing these fairness issues in LLM-RS faces two significant challenges: 1) existing debiasing methods, designed for specific bias types, lack the generality to handle the diverse or emerging biases of real-world applications; 2) debiasing methods that rely on retraining are computationally infeasible given the massive parameter scale of LLMs. To overcome these challenges, we propose FUDLR (Fast Unified Debiasing for LLM-RS). The core idea is to reformulate debiasing as an efficient two-stage machine unlearning task. First, FUDLR identifies the bias-inducing samples to unlearn through a novel bias-agnostic mask, optimized to balance fairness improvement against accuracy preservation. This bias-agnostic design adapts to diverse or co-existing biases simply by incorporating different fairness metrics. Second, FUDLR performs efficient debiasing by estimating and removing the influence of the identified samples on the model parameters. Extensive experiments demonstrate that FUDLR effectively and efficiently improves fairness while preserving recommendation accuracy, offering a practical path toward socially responsible LLM-RS. The code and data are available at https://github.com/JinLi-i/FUDLR.
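The abstract does not spell out the unlearning update itself. A minimal sketch of the general idea, "estimate and remove the influence of identified samples on model parameters without retraining", is the classical influence-function removal step on a convex surrogate. Everything below is illustrative: the logistic-regression model, the `forget_mask` (standing in for FUDLR's bias-agnostic mask), and all function names are assumptions, not the authors' implementation, which operates on LLM parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=0.05, steps=3000, lr=0.5):
    """Gradient-descent fit of L2-regularized logistic regression
    (a convex stand-in for the recommender model)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def influence_unlearn(X, y, w, forget_mask, lam=0.05):
    """One-shot removal of the masked samples via the influence
    approximation  w' ~= w + (1/n) H^{-1} sum_{i in S} grad_i(w):
    no retraining, just one Hessian solve at the trained parameters."""
    n, d = X.shape
    p = sigmoid(X @ w)
    r = p * (1.0 - p)                                   # per-sample curvature weights
    H = (X * r[:, None]).T @ X / n + lam * np.eye(d)    # Hessian of full objective
    g = X[forget_mask].T @ (p[forget_mask] - y[forget_mask])  # summed forget-set grads
    return w + np.linalg.solve(H, g) / n

# Toy demonstration on synthetic data.
rng = np.random.default_rng(0)
n, d = 300, 3
X = rng.normal(size=(n, d))
y = (sigmoid(X @ np.array([1.0, -1.0, 0.5])) > rng.random(n)).astype(float)

mask = np.zeros(n, dtype=bool)
mask[:5] = True                          # samples flagged for unlearning

w = fit_logreg(X, y)                     # original model
w_unl = influence_unlearn(X, y, w, mask) # fast influence-based removal
w_ref = fit_logreg(X[~mask], y[~mask])   # exact retrain, for comparison only
```

In this convex setting the one-shot update lands much closer to the fully retrained parameters than the original model does, at the cost of a single linear solve instead of a retraining run, which is the efficiency argument the abstract makes for LLM-scale models (where the Hessian solve would itself be approximated).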