Personalized federated learning (PFL) is an approach proposed to address poor convergence on heterogeneous data. However, most existing PFL frameworks require strong assumptions to guarantee convergence. In this paper, we propose an alternating direction method of multipliers (ADMM) for training PFL models with the Moreau envelope (FLAME), which achieves a sublinear convergence rate under the relatively weak assumption of gradient Lipschitz continuity. Moreover, owing to the gradient-free nature of ADMM, FLAME alleviates the need for hyperparameter tuning; in particular, it avoids tuning the learning rate when training the global model. In addition, we propose a biased client selection strategy to expedite the training convergence of PFL models. Our theoretical analysis establishes global convergence under both unbiased and biased client selection strategies. Our experiments validate that FLAME, when trained on heterogeneous data, outperforms state-of-the-art methods in model performance. In terms of communication efficiency, it achieves an average speedup of 3.75x over the baselines. Furthermore, experimental results confirm that the biased client selection strategy accelerates the convergence of both personalized and global models.
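As a brief sketch of the formulation the abstract refers to: Moreau-envelope-based PFL (as popularized by pFedMe and related work) is commonly written as a bi-level objective, where each client keeps a personalized model regularized toward a shared global model. The symbols below ($f_i$, $\theta_i$, $w$, $\lambda$, $N$) are standard notation from this literature and are assumptions, not taken from the abstract itself:

```latex
\min_{w \in \mathbb{R}^d} \; \frac{1}{N}\sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) \;=\; \min_{\theta_i \in \mathbb{R}^d}
\Big\{ f_i(\theta_i) + \frac{\lambda}{2}\,\|\theta_i - w\|^2 \Big\},
```

where $f_i$ is client $i$'s local loss, $\theta_i$ its personalized model, $w$ the global model, and $\lambda > 0$ controls how strongly personalization is pulled toward the global model; $F_i$ is the Moreau envelope of $f_i$. An ADMM-style method such as FLAME would alternate updates over the personalized models, the global model, and dual variables rather than applying gradient descent to $w$ directly.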