Large Language Models (LLMs) have revolutionized Recommender Systems (RS) through advanced generative user modeling. However, LLM-based RS (LLM-RS) often inadvertently perpetuate biases present in the training data, leading to severe fairness issues. Addressing these fairness problems in LLM-RS faces two significant challenges: 1) existing debiasing methods, designed for specific bias types, lack the generality to handle diverse or emerging biases in real-world applications; 2) debiasing methods that rely on retraining are computationally infeasible given the massive parameter scale of LLMs. To overcome these challenges, we propose FUDLR (Fast Unified Debiasing for LLM-RS). The core idea is to reformulate debiasing as an efficient two-stage machine unlearning task. First, FUDLR identifies the bias-inducing samples to unlearn via a novel bias-agnostic mask, optimized to balance fairness improvement against accuracy preservation; this bias-agnostic design adapts to diverse or co-existing biases simply by plugging in different fairness metrics. Second, FUDLR performs efficient debiasing by estimating and removing the influence of the identified samples on the model parameters. Extensive experiments demonstrate that FUDLR effectively and efficiently improves fairness while preserving recommendation accuracy, offering a practical path toward socially responsible LLM-RS. The code and data are available at https://github.com/JinLi-i/FUDLR.
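The second stage described above, estimating and removing the influence of identified samples on the model parameters without retraining, follows the general spirit of influence-function-based machine unlearning. As a minimal, hypothetical sketch only (FUDLR targets large LLM-RS; the logistic-regression model, the single Newton step on the retained data, and all function names below are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam):
    # gradient of mean logistic loss plus L2 regularization
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hess(w, X, y, lam):
    # Hessian of the same objective
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / len(y) + lam * np.eye(X.shape[1])

def train(X, y, lam=0.1, lr=0.5, steps=4000):
    # plain gradient descent to (near-)convergence
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y, lam)
    return w

def unlearn(w_full, X_keep, y_keep, lam=0.1):
    # One Newton step on the retained-data loss: approximates the model
    # that retraining WITHOUT the forgotten samples would produce, at a
    # fraction of the cost -- the "estimate and remove influence" idea.
    g = grad(w_full, X_keep, y_keep, lam)
    H = hess(w_full, X_keep, y_keep, lam)
    return w_full - np.linalg.solve(H, g)

# Synthetic data; the last 30 samples play the role of the
# bias-inducing samples selected for unlearning (labels flipped).
X = rng.normal(size=(300, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)
y[-30:] = 1 - y[-30:]

w_full = train(X, y)                      # model trained on all data
w_unlearned = unlearn(w_full, X[:-30], y[:-30])  # cheap influence removal
w_retrained = train(X[:-30], y[:-30])     # expensive gold standard
```

In this toy setting the unlearned parameters land much closer to the retrained-from-scratch parameters than the original model does, which is the property that makes unlearning a practical substitute for retraining at LLM scale.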