This paper addresses the critical challenges of communication overhead, data heterogeneity, and privacy in deep learning for channel state information (CSI) feedback in massive MIMO systems. To this end, we propose Fed-PELAD, a novel federated learning framework that combines personalized encoders with a LoRA-adapted shared decoder. Specifically, a personalized encoder is trained locally on each user equipment (UE) to capture device-specific channel characteristics, while the shared decoder is updated globally under the coordination of the base station (BS) using Low-Rank Adaptation (LoRA). This design ensures that only compact LoRA adapter parameters, rather than full model updates, are transmitted for aggregation. To further enhance convergence stability, we introduce an alternating freezing strategy with a calibrated learning-rate ratio during LoRA aggregation. Extensive simulations on 3GPP-standard channel models demonstrate that Fed-PELAD requires only 42.97\% of the uplink communication cost of conventional methods while achieving a performance gain of 1.2 dB in CSI feedback accuracy under heterogeneous conditions.
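To make the uplink-saving mechanism concrete, the following is a minimal NumPy sketch, not the paper's implementation: each UE keeps its personalized encoder local and uploads only the low-rank LoRA factors of the shared decoder, which the BS averages FedAvg-style. All layer sizes, the rank, and the helper `local_update` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 256, 256, 8  # hypothetical decoder-layer sizes and LoRA rank

# Frozen pretrained weight of the shared decoder layer (stays at the BS).
W0 = rng.standard_normal((d_out, d_in))

def local_update(seed):
    """Stand-in for one UE's local training step: returns its LoRA factors.

    Real training would fit (A, B) to that UE's CSI data; here we just
    draw small random factors to illustrate the message sizes involved.
    """
    r = np.random.default_rng(seed)
    B = r.standard_normal((d_out, rank)) * 0.01  # up-projection factor
    A = r.standard_normal((rank, d_in)) * 0.01   # down-projection factor
    return A, B

# BS aggregates only the compact adapters from each UE.
updates = [local_update(s) for s in range(4)]
A_avg = np.mean([A for A, _ in updates], axis=0)
B_avg = np.mean([B for _, B in updates], axis=0)

# Effective shared-decoder weight after merging the averaged adapter.
W_eff = W0 + B_avg @ A_avg

# Uplink payload: r*(d_in + d_out) values per layer instead of d_in*d_out.
full = d_in * d_out
lora = rank * (d_in + d_out)
print(f"uplink cost ratio: {lora / full:.4f}")  # 0.0625 for these sizes
```

Note that averaging `A` and `B` independently is not the same as averaging the products `B_i @ A_i` across UEs; this mismatch is one reason aggregation schemes such as the alternating freezing strategy described above (updating one factor while the other is held fixed) are used.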