Federated learning (FL) has garnered considerable attention due to its privacy-preserving design. Nonetheless, because the server cannot directly inspect or curate user data, FL models are prone to group fairness issues, i.e., bias with respect to sensitive attributes such as race or gender. To tackle this issue, this paper proposes a novel algorithm, fair federated averaging with the augmented Lagrangian method (FFALM), designed specifically to address group fairness issues in FL. Specifically, we impose a fairness constraint on the training objective and solve the minimax reformulation of the resulting constrained optimization problem. We then derive a theoretical upper bound on the convergence rate of FFALM. The effectiveness of FFALM in improving fairness is shown empirically on the CelebA and UTKFace datasets in the presence of severe statistical heterogeneity.
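To make the constrained-optimization idea concrete, the following is a minimal, hypothetical sketch of the augmented-Lagrangian minimax scheme the abstract refers to, on a toy scalar problem. Here `f` stands in for the training loss and `c` for a fairness-gap constraint (`c(theta) <= 0`); the surrogate functions, step sizes, and penalty parameter `rho` are illustrative assumptions, not the actual FFALM objective or federated update rule from the paper.

```python
# Toy constrained problem: minimize f(theta) subject to c(theta) <= 0,
# solved via the augmented Lagrangian
#   L(theta, lam) = f(theta) + lam * c(theta) + (rho / 2) * max(0, c(theta))**2,
# alternating primal minimization over theta with dual ascent on lam >= 0.

def f(theta):
    # Surrogate training loss (stand-in for the FL objective).
    return (theta - 2.0) ** 2

def c(theta):
    # Surrogate fairness constraint, required to satisfy c(theta) <= 0.
    return theta - 1.0

rho = 10.0   # penalty parameter (illustrative choice)
lam = 0.0    # Lagrange multiplier, kept nonnegative
theta = 0.0  # primal variable (stand-in for model parameters)

for _ in range(50):                      # outer augmented-Lagrangian iterations
    for _ in range(200):                 # inner (approximate) primal minimization
        # Gradient of L w.r.t. theta: f'(theta) + lam * c'(theta)
        # + rho * max(0, c(theta)) * c'(theta), with c'(theta) = 1 here.
        grad = 2.0 * (theta - 2.0) + lam + rho * max(0.0, c(theta))
        theta -= 0.05 * grad
    # Dual ascent step, projected onto lam >= 0.
    lam = max(0.0, lam + rho * c(theta))

# The constrained optimum of this toy problem is theta = 1 with multiplier lam = 2.
print(round(theta, 2), round(lam, 2))  # → 1.0 2.0
```

In FFALM itself the minimax problem is solved over the federated training objective rather than a scalar toy function, but the alternation above (primal descent on the model, dual ascent on the multiplier) is the core mechanism the abstract describes.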