Statistical heterogeneity is a root cause of the tension among accuracy, fairness, and robustness in federated learning (FL), and addressing it is key to paving a path forward. Personalized FL (PFL) aims to reduce the impact of statistical heterogeneity by training a personalized model for each user, and it inherently offers benefits in fairness and robustness. However, existing PFL frameworks focus on improving the performance of the personalized models while neglecting the global model. Moreover, they achieve only sublinear convergence rates and rely on strong assumptions. In this paper, we propose FLAME, an optimization framework that employs the alternating direction method of multipliers (ADMM) to train personalized and global models. We further propose a model-selection strategy that improves performance when clients hold different types of heterogeneous data. Our theoretical analysis establishes global convergence and two kinds of convergence rates for FLAME under mild assumptions. We also show theoretically that FLAME is more robust and fair than state-of-the-art methods on a class of linear problems. Our experiments show that FLAME outperforms state-of-the-art methods in convergence and accuracy, achieves higher test accuracy under various attacks, and performs more uniformly across clients.
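The abstract does not spell out FLAME's objective; as a hedged sketch, a standard consensus-ADMM formulation for PFL couples each client's personalized model $w_i$ to a global model $z$ (the symbols $f_i$, $w_i$, $z$, $\lambda_i$, and $\rho$ here are illustrative assumptions, not taken from the paper):
\[
\min_{\{w_i\},\,z}\ \sum_{i=1}^{N} f_i(w_i) \quad \text{s.t.} \quad w_i = z,\quad i=1,\dots,N,
\]
where $f_i$ is client $i$'s local loss. ADMM then alternates updates on the augmented Lagrangian with dual variables $\lambda_i$ and penalty $\rho>0$:
\[
w_i^{k+1} = \arg\min_{w_i}\ f_i(w_i) + \langle \lambda_i^{k},\, w_i - z^{k}\rangle + \tfrac{\rho}{2}\,\|w_i - z^{k}\|^2,
\]
\[
z^{k+1} = \tfrac{1}{N}\sum_{i=1}^{N}\Bigl(w_i^{k+1} + \tfrac{1}{\rho}\,\lambda_i^{k}\Bigr),
\qquad
\lambda_i^{k+1} = \lambda_i^{k} + \rho\,\bigl(w_i^{k+1} - z^{k+1}\bigr).
\]
In such schemes, the $z$-update averages the clients' personalized models shifted by their scaled duals, which is how a meaningful global model can be maintained alongside the personalized $w_i$.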