Achieving fairness across diverse clients in Federated Learning (FL) remains a significant challenge due to data heterogeneity and the inaccessibility of sensitive attributes in clients' private datasets. This study addresses this issue by introducing \texttt{EquiFL}, a novel approach designed to enhance both local and global fairness in federated learning environments. \texttt{EquiFL} incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness. The proposed coordination mechanism also prevents bias from propagating across clients during the collaboration phase. Through extensive experiments on multiple benchmarks, we demonstrate that \texttt{EquiFL} not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness. The results further indicate that \texttt{EquiFL} yields a uniform performance distribution among clients, thus contributing to performance fairness. Finally, we showcase the benefits of \texttt{EquiFL} on a real-world distributed dataset from a healthcare application, specifically in predicting the effects of treatments on patients across various hospital locations.
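To make the abstract's notion of a fairness-regularized local objective concrete, the following is a minimal sketch of one common way to add a fairness term to a client's loss. It is a hypothetical stand-in, not the paper's actual formulation: the function name `local_objective`, the use of a demographic-parity gap as the fairness term, and the weight `lam` are all assumptions for illustration.

```python
import numpy as np

def local_objective(y_true, y_prob, sensitive, lam=1.0):
    """Binary cross-entropy plus a demographic-parity gap penalty.

    A hypothetical sketch of a fairness-regularized local loss;
    EquiFL's exact fairness term may differ.
    """
    eps = 1e-12  # guard against log(0)
    ce = -np.mean(y_true * np.log(y_prob + eps)
                  + (1 - y_true) * np.log(1 - y_prob + eps))
    # Demographic-parity gap: difference in mean predicted positive
    # rate between the two sensitive groups (coded 0 and 1).
    gap = abs(y_prob[sensitive == 1].mean() - y_prob[sensitive == 0].mean())
    # lam trades off local accuracy (ce) against local fairness (gap).
    return ce + lam * gap
```

Setting `lam = 0` recovers the plain accuracy-driven loss; increasing it pushes each client's model toward group-balanced predictions, mirroring the accuracy-fairness trade-off the abstract describes.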