Federated learning (FL) is an emerging machine learning paradigm for overcoming the challenge of data silos and has garnered significant attention. However, our observations show that a globally trained model, even one that is effective overall, may exhibit performance disparities across different clients. This implies that models jointly trained by clients may lead to unfair outcomes. On the other hand, relevant studies indicate that the transmission of gradients or models in federated learning can also give rise to privacy leakage, such as membership inference attacks. To address the first issue, we propose a fairness-aware federated learning algorithm, termed FedFair. Building upon FedFair, we introduce privacy protection to form the FedFDP algorithm, which addresses the second issue. In FedFDP, we devise a fairness-aware clipping strategy that achieves differential privacy while adjusting fairness. Additionally, for the extra uploaded loss values, we present an adaptive clipping approach to maximize utility. Furthermore, we theoretically prove that our algorithm converges and satisfies differential privacy. Lastly, extensive experimental results demonstrate that FedFair and FedFDP significantly outperform state-of-the-art solutions in terms of model performance and fairness. Code and data are accessible at https://anonymous.4open.science/r/FedFDP-5607.
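The clip-then-noise step that underlies differentially private training, and on which a fairness-aware clipping strategy such as FedFDP's would build, can be sketched as follows. This is a minimal illustration of the standard Gaussian mechanism applied to a gradient; the function name and parameters are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def clip_and_noise(grad, clip_norm, noise_multiplier, rng):
    """Bound the gradient's L2 norm to `clip_norm`, then add Gaussian
    noise calibrated to that bound (standard clip-and-noise recipe).
    All names here are illustrative, not FedFDP's actual API."""
    norm = np.linalg.norm(grad)
    # Scale down only if the norm exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Noise scale is proportional to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])  # L2 norm is 5, above the bound below
out = clip_and_noise(g, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

With the noise multiplier set to zero, the output is simply the input rescaled to the clipping bound; in actual differentially private training the multiplier is positive and chosen from the target privacy budget.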