Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos, and it has attracted considerable attention. However, FL faces persistent issues of fairness and data privacy. To tackle both challenges simultaneously, we propose a fairness-aware federated learning algorithm called FedFair. Building on FedFair, we incorporate differential privacy to obtain the FedFDP algorithm, which addresses the trade-offs among fairness, privacy protection, and model performance. In FedFDP, we develop a fairness-aware gradient clipping technique to explore the relationship between fairness and differential privacy. Through convergence analysis, we identify the fairness adjustment parameter that jointly maximizes model performance and fairness. Additionally, we present an adaptive clipping method for uploaded loss values to reduce privacy budget consumption. Extensive experimental results show that FedFDP significantly surpasses state-of-the-art solutions in both model performance and fairness.
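To make the idea of fairness-aware gradient clipping concrete, the following is a minimal, hypothetical sketch in the style of DP-SGD: each sample's clip bound is modulated by its normalized loss (so that under-served, high-loss samples retain more gradient signal), and Gaussian noise is then added before averaging. The weighting rule, the parameter `alpha`, and the function name are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fairness_aware_dp_clip(grads, losses, base_clip=1.0, alpha=0.5,
                           noise_multiplier=1.0, rng=None):
    """Hypothetical sketch of fairness-aware clipping with DP noise.

    grads  : (n, d) array of per-sample gradients
    losses : (n,) array of per-sample losses
    alpha  : assumed fairness weight in [0, 1]; 0 recovers uniform clipping
    """
    rng = rng or np.random.default_rng(0)
    # Scale each sample's clip bound by its loss relative to the mean,
    # so high-loss samples keep a larger share of their gradient.
    w = losses / (losses.mean() + 1e-12)
    bounds = base_clip * (1.0 + alpha * (w - 1.0))
    clipped = []
    for g, c in zip(grads, bounds):
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, c / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian mechanism: noise calibrated to the largest clip bound,
    # which upper-bounds any single sample's contribution.
    noise = rng.normal(0.0, noise_multiplier * bounds.max(), size=summed.shape)
    return (summed + noise) / len(grads)
```

With `noise_multiplier=0` the routine reduces to plain loss-weighted clipping, which makes it easy to check that no sample's contribution exceeds its bound.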