Personalized Federated Learning (FL) algorithms collaboratively train customized models for each client, enhancing the accuracy of the learned models on each client's local data (e.g., by clustering similar clients, fine-tuning models locally, or imposing regularization terms). In this paper, we investigate the impact of such personalization techniques on the group fairness of the learned models, and show that personalization can also lead to improved (local) fairness as an unintended benefit. We begin by illustrating these benefits through numerical experiments that compare several classes of personalized FL algorithms against a baseline FedAvg algorithm, elaborate on the reasons why personalization improves fairness, and then provide analytical support. Motivated by these findings, we show how to build on this (unintended) fairness benefit by integrating a fairness metric into the cluster-selection procedure of clustering-based personalized FL algorithms, improving the fairness-accuracy trade-off attainable through them. Specifically, we propose two new fairness-aware federated clustering algorithms, Fair-FCA and Fair-FL+HC, which extend the existing IFCA and FL+HC algorithms, and demonstrate their ability to strike a (tunable) balance between accuracy and fairness at the client level.
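To make the fairness-aware cluster-selection idea concrete, the following is a minimal sketch (not the authors' code) of how a client could pick among cluster models by trading off local loss against a local group-fairness gap, in the spirit of extending IFCA-style assignment. The function names (`select_cluster`, `local_loss`, `fairness_gap`), the logistic loss, the statistical-parity metric, and the trade-off weight `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_loss(model_w, X, y):
    """Average logistic loss of a linear model on a client's local data."""
    logits = X @ model_w
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fairness_gap(model_w, X, y, group):
    """Local statistical-parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|.
    (One possible group-fairness metric; the paper's metric may differ.)"""
    yhat = (X @ model_w > 0).astype(float)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

def select_cluster(cluster_models, X, y, group, gamma=0.5):
    """Assign a client to the cluster model minimizing a weighted combination
    of local loss and local fairness gap; gamma = 0 recovers assignment by
    loss alone (plain IFCA-style selection), gamma = 1 uses fairness only."""
    scores = [
        (1 - gamma) * local_loss(w, X, y) + gamma * fairness_gap(w, X, y, group)
        for w in cluster_models
    ]
    return int(np.argmin(scores))

# Toy usage: two random cluster models, one client's synthetic local data.
rng = np.random.default_rng(0)
models = [rng.normal(size=5) for _ in range(2)]
X = rng.normal(size=(100, 5))
y = (rng.random(100) < 0.5).astype(float)
group = (rng.random(100) < 0.5).astype(int)
print(select_cluster(models, X, y, group, gamma=0.3))
```

Varying `gamma` is what would let such a scheme tune the balance between per-client accuracy and local fairness described above.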