Federated learning (FL) and federated distillation (FD) are distributed learning paradigms that train user equipment (UE) models with enhanced privacy, each offering a different trade-off between noise robustness and learning speed. To mitigate their respective weaknesses, we propose a hybrid federated learning (HFL) framework in which each UE transmits either gradients or logits, and the base station (BS) selects the per-round weights of the FL and FD updates. We derive the convergence of the HFL framework and introduce two methods to exploit its degrees of freedom (DoF): (i) adaptive UE clustering via Jenks optimization and (ii) adaptive weight selection via a damped Newton method. Numerical results show that HFL achieves superior test accuracy at low SNR when both DoF are exploited.
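To make the clustering step concrete, the following is a minimal sketch of two-class Jenks natural breaks, which splits the UEs into two groups by minimizing the within-class sum of squared deviations. The choice of the per-UE score (channel SNR here) and the assignment of the low-score class to logit (FD) transmission are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def jenks_two_class_break(values):
    """Two-class Jenks natural breaks: return the threshold that minimizes
    the total within-class sum of squared deviations over all split points."""
    x = np.sort(np.asarray(values, dtype=float))
    best_split, best_sdcm = 1, np.inf
    for s in range(1, len(x)):                 # candidate split: x[:s] | x[s:]
        low, high = x[:s], x[s:]
        sdcm = ((low - low.mean()) ** 2).sum() + ((high - high.mean()) ** 2).sum()
        if sdcm < best_sdcm:
            best_sdcm, best_split = sdcm, s
    return x[best_split - 1]                   # largest value in the low class

# Hypothetical usage: cluster UEs by channel SNR (dB); low-SNR UEs send
# logits (FD update), high-SNR UEs send gradients (FL update).
ue_snr_db = np.array([2.1, 3.0, 2.7, 11.5, 12.8, 10.9, 3.3])
threshold = jenks_two_class_break(ue_snr_db)
fd_ues = np.where(ue_snr_db <= threshold)[0]
fl_ues = np.where(ue_snr_db > threshold)[0]
print(threshold, fd_ues, fl_ues)
```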