Despite growing efforts to mitigate unfairness in recommender systems, existing fairness-aware methods typically fix the fairness requirement at training time and offer limited flexibility after training. In real-world deployments, however, diverse stakeholders may impose different fairness requirements over time, and retraining the model for each new requirement is prohibitively expensive. To address this limitation, we propose Cofair, a train-once framework that enables post-training fairness control in recommendation. Specifically, Cofair introduces a shared representation layer with fairness-conditioned adapter modules that produce user embeddings specialized for different fairness levels, along with a user-level regularization term that enforces monotonic fairness improvement for each user across these levels. We theoretically establish that the adversarial objective of Cofair upper-bounds the demographic parity gap and that the regularization term enforces progressive fairness at the user level. Comprehensive experiments on multiple datasets and backbone models demonstrate that our framework provides dynamic fairness control across levels, achieving fairness-accuracy trade-off curves comparable to or better than state-of-the-art baselines, without retraining for each new fairness requirement. Our code is publicly available at https://github.com/weixinchen98/Cofair.
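To make the two components concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: a bottleneck adapter conditioned on a scalar fairness level, and a user-level penalty that discourages any user's unfairness from increasing as the level rises. All names (`FairnessConditionedAdapter`, `monotonic_fairness_penalty`), the bottleneck size, and the lambda-conditioning scheme are illustrative assumptions, not Cofair's actual implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn


class FairnessConditionedAdapter(nn.Module):
    """Hypothetical sketch: a bottleneck adapter on top of a shared
    representation layer, modulated by a fairness level lam in [0, 1]."""

    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        # Concatenate the fairness level to the input, hence dim + 1.
        self.down = nn.Linear(dim + 1, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, shared_emb: torch.Tensor, lam: float) -> torch.Tensor:
        # Broadcast the scalar fairness level to one column per user.
        lam_col = torch.full(
            (shared_emb.size(0), 1), lam, device=shared_emb.device
        )
        h = self.act(self.down(torch.cat([shared_emb, lam_col], dim=-1)))
        # Residual update scaled by lam: lam = 0 recovers the shared embedding.
        return shared_emb + lam * self.up(h)


def monotonic_fairness_penalty(
    unfairness_low: torch.Tensor, unfairness_high: torch.Tensor
) -> torch.Tensor:
    """Hypothetical user-level regularizer: penalizes any user whose
    per-user unfairness estimate at the higher fairness level exceeds
    the estimate at the lower level, encouraging monotonic improvement."""
    return torch.relu(unfairness_high - unfairness_low).mean()
```

Under these assumptions, the residual form is the natural design choice: setting lam = 0 leaves the shared embedding untouched (the accuracy-oriented extreme), while increasing lam smoothly injects the fairness-specialized correction, and the hinge-style penalty is zero exactly when every user's unfairness is non-increasing across adjacent levels.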