Distributed machine learning (ML) over wireless networks hinges on accurate channel state information (CSI) and efficient exchange of high-dimensional model updates. These demands are governed by channel coherence time and bandwidth, which vary across devices (links) due to heterogeneous mobility and scattering, causing degraded downlink delivery and distorted uplink over-the-air (OTA) aggregation. We propose a coherence-aware federated learning (FL) framework that jointly addresses impairments on downlink and uplink with communication-efficient strategies. In the downlink, we employ product superposition to multiplex global model symbols for long-coherence (static) devices onto the pilot tones required by short-coherence (dynamic) devices for channel estimation, turning pilot overhead into payload while preserving estimation fidelity. In the proposed scheme, an orthogonal frequency-division multiplexing (OFDM) super-block is partitioned into sub-blocks aligned with the smallest coherence time and bandwidth, enabling consistent channel estimation and stabilizing OTA aggregation across heterogeneous devices. Partial model reception at dynamic devices is mitigated via previous local model filling (PLMF), which reuses prior updates. We establish convergence guarantees under heterogeneous link impairments, imperfect CSI, and aggregation noise. The proposed framework enables efficient scheduling under coherence heterogeneity; analysis and experiments demonstrate notable gains in communication efficiency, latency, and learning accuracy over conventional FL baselines.
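The previous local model filling (PLMF) step described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name `plmf_fill` and the boolean reception mask are assumptions introduced here, and the real scheme operates on model update vectors delivered over the partitioned OFDM sub-blocks.

```python
import numpy as np

def plmf_fill(received, mask, prev_local):
    """Previous local model filling (PLMF), sketched: coordinates of the
    global model that a dynamic device failed to receive (mask == False)
    are filled with that device's previous local model, rather than being
    zeroed or dropped, before the next round of local training."""
    return np.where(mask, received, prev_local)

# Toy example: a 6-parameter global model; a dynamic device with short
# coherence time decodes only the first four entries of the broadcast.
global_model = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
mask = np.array([True, True, True, True, False, False])
prev_local = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

local_init = plmf_fill(global_model, mask, prev_local)
print(local_init)  # received entries from the global model, rest reused
```

The missing coordinates are thus initialized from stale but locally consistent values, which is what makes the convergence analysis under partial reception tractable.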