Current approaches to group fairness in federated learning assume that sensitive groups are predefined and labeled during training. However, due to factors ranging from emerging regulations to the dynamic and location-dependent nature of protected groups, this assumption may be unsuitable in many real-world scenarios. In this work, we propose a new approach to guaranteeing group fairness that does not rely on any predefined definition of sensitive groups or additional labels. Our objective allows the federation to learn a Pareto-efficient global model that ensures worst-case group fairness, and it enables trade-offs between fairness and utility via a single hyper-parameter, subject only to a group-size constraint. Consequently, any sufficiently large subset of the population is guaranteed to receive at least a minimum level of utility from the model. The proposed objective encompasses existing approaches as special cases, such as empirical risk minimization and subgroup-robustness objectives from centralized machine learning. We provide an algorithm that solves this problem in the federated setting and enjoys convergence and excess-risk guarantees. Our empirical results indicate that the proposed approach can effectively improve the performance of the worst-off group that may be present without unnecessarily hurting average performance, performs better than or comparably to relevant baselines, and recovers a large set of solutions with different fairness-utility trade-offs.
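To make the group-size constraint concrete: the worst-case average loss over all subsets containing at least an α-fraction of the population equals the conditional value-at-risk (CVaR) of the per-example losses at level α. The sketch below illustrates only this connection; the function name worst_group_risk is hypothetical and the snippet is not the paper's federated algorithm, just a minimal single-machine illustration under that CVaR assumption. Setting α = 1 recovers empirical risk minimization, while decreasing α shifts weight toward the worst-off group.

```python
import numpy as np

def worst_group_risk(losses: np.ndarray, alpha: float) -> float:
    # Hypothetical sketch, not the paper's algorithm.
    # Maximizing the mean loss over any subset holding at least an
    # alpha-fraction of the n examples is achieved by the
    # ceil(alpha * n) largest losses, i.e. the CVaR at level alpha.
    n = len(losses)
    k = max(1, int(np.ceil(alpha * n)))  # group-size constraint
    worst_subset = np.sort(losses)[-k:]  # the k largest per-example losses
    return float(worst_subset.mean())

# alpha = 1.0 averages all losses (ERM); alpha = 0.5 averages the worst half.
losses = np.array([0.1, 0.2, 0.5, 0.9])
print(worst_group_risk(losses, alpha=1.0))  # 0.425 (empirical risk)
print(worst_group_risk(losses, alpha=0.5))  # 0.7   (worst half of the data)
```

In this reading, α plays the role of the single hyper-parameter mentioned above: it lower-bounds the size of the subpopulation whose utility the model must protect, and sweeping it traces out the fairness-utility trade-offs reported in the abstract.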