Federated learning (FL) is an emerging machine learning paradigm that overcomes the challenge of data silos and has garnered significant attention. However, FL faces challenges in both fairness and data privacy. To address these two challenges simultaneously, we first propose a fairness-aware federated learning algorithm, termed FedFair. Building on FedFair, we then introduce differential privacy protection to form the FedFDP algorithm, which navigates the trade-offs among fairness, privacy protection, and model performance. In FedFDP, we design a fairness-aware gradient clipping technique that reveals the relationship between fairness and differential privacy. Through convergence analysis, we determine the fairness-adjustment parameters that simultaneously achieve the best model performance and fairness. Additionally, for the extra uploaded loss values, we present an adaptive clipping method to minimize privacy budget consumption. Extensive experimental results demonstrate that FedFDP significantly outperforms state-of-the-art solutions in both model performance and fairness. Code and datasets will be made public upon acceptance.
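To make the clipping-plus-noise mechanism mentioned above concrete, the following is a minimal generic sketch of standard per-update L2 clipping with Gaussian noise (DP-SGD style sanitization). It illustrates only the common building block, not FedFDP's fairness-aware variant; the function name and parameters are illustrative assumptions, not the paper's API.

```python
import numpy as np

def clip_and_noise(grad, clip_norm, noise_multiplier, rng):
    """Generic DP sanitization sketch (not FedFDP itself):
    scale `grad` so its L2 norm is at most `clip_norm`, then add
    Gaussian noise whose scale is calibrated to that clipping norm."""
    norm = np.linalg.norm(grad)
    # Clip: multiply by min(1, C / ||g||) so the norm never exceeds C.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Noise standard deviation is proportional to the sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])  # L2 norm is 5.0, so it will be scaled down
sanitized = clip_and_noise(g, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

Because clipping bounds each update's sensitivity, the added Gaussian noise yields a differential privacy guarantee; a fairness-aware variant would adjust the clipping per client, which is where the trade-off with fairness arises.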