Personalized federated learning (PFL) offers a way to balance personalization and generalization by using federated learning (FL) to guide personalized learning (PL). Little attention has been paid to wireless PFL (WPFL), where privacy concerns arise. Performance fairness of the PL models is another challenge, resulting from communication bottlenecks in WPFL. This paper exploits quantization errors to enhance the privacy of WPFL and proposes a novel quantization-assisted Gaussian differential privacy (DP) mechanism. We analyze the convergence upper bounds of the individual PL models by accounting for the impact of the mechanism (i.e., quantization errors and Gaussian DP noise) and of imperfect communication channels on the FL process of WPFL. By minimizing the maximum of these bounds, we design an optimal transmission scheduling strategy that yields min-max fairness for WPFL with OFDMA interfaces. This is achieved by revealing the nested structure of the problem and decoupling it into subproblems that are solved sequentially: first for client selection, channel allocation, and power control, and then for the learning rates and PL-FL weighting coefficients. Experiments validate our analysis and demonstrate that our approach substantially outperforms alternative scheduling strategies by 87.08%, 16.21%, and 38.37% in accuracy, the maximum test loss of participating clients, and fairness (Jain's index), respectively.
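The core idea of the quantization-assisted Gaussian DP mechanism can be illustrated with a minimal sketch: a client's model update is first stochastically quantized (the quantization error contributes to privacy in the paper's analysis) and then perturbed with calibrated Gaussian noise before transmission. The function name, parameter names, and default values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quantized_gaussian_dp(update, num_levels=16, sigma=0.1, rng=None):
    """Hypothetical sketch: stochastic quantization followed by Gaussian DP noise.

    `update` is a client's model update (NumPy array); `num_levels` is the number
    of quantization levels; `sigma` scales the Gaussian noise relative to the
    update's dynamic range. All names and defaults are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = float(np.max(np.abs(update)))
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero update

    # Map each entry into [0, num_levels - 1] and round stochastically;
    # the unbiased rounding error is the "quantization error" the paper
    # exploits to strengthen the privacy guarantee.
    normalized = (update / scale + 1.0) / 2.0 * (num_levels - 1)
    lower = np.floor(normalized)
    quantized = lower + (rng.random(update.shape) < (normalized - lower))

    # Dequantize back to the original range, then add calibrated Gaussian
    # noise to provide the formal DP guarantee.
    dequantized = (quantized / (num_levels - 1) * 2.0 - 1.0) * scale
    return dequantized + rng.normal(0.0, sigma * scale, update.shape)
```

In an FL round, each client would apply this to its update before the uplink transmission; the server then aggregates the noisy, quantized updates as usual.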