Federated learning (FL) has emerged as a transformative distributed learning paradigm that enables multiple clients to collaboratively train a global model under the coordination of a central server without sharing their raw training data. While FL offers notable advantages, it faces critical challenges in ensuring fairness across diverse demographic groups. To address these concerns, various fairness-aware debiasing methods have been proposed; however, many either require modifications to clients' training protocols or lack flexibility in their aggregation strategies. In this work, we address these limitations by introducing EquFL, a novel server-side debiasing method that mitigates bias in FL systems. After receiving model updates from the clients, the server generates a single calibrated update and integrates it with the aggregated client updates to produce an adjusted global model with reduced bias. Theoretically, we establish that EquFL converges to the optimal global model achieved by FedAvg while effectively reducing fairness loss over training rounds. Empirically, we demonstrate that EquFL significantly mitigates bias within the system, confirming its practical effectiveness.
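The server-side aggregation described above can be sketched as follows. This is a minimal illustration only: the blend coefficient `alpha`, the `calibrated_update` input, and all function names are assumptions for exposition, not the paper's actual formulation of how the calibrated update is computed.

```python
import numpy as np

def fedavg(client_updates, weights):
    """Weighted average of client model updates (standard FedAvg aggregation)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

def equfl_round(global_model, client_updates, weights, calibrated_update, alpha=0.1):
    """One EquFL-style round (illustrative): blend the aggregated client
    update with a single server-generated calibrated update intended to
    lower fairness loss. `alpha` (hypothetical) controls blend strength."""
    aggregated = fedavg(client_updates, weights)
    adjusted_update = (1 - alpha) * aggregated + alpha * calibrated_update
    return global_model + adjusted_update
```

Because the calibration happens entirely at the server after updates arrive, clients run unmodified local training, which is the flexibility the abstract emphasizes.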