The deployment of large-scale neural networks within the Open Radio Access Network (O-RAN) architecture is pivotal for enabling native edge intelligence. However, this paradigm faces two critical bottlenecks: the prohibitive memory footprint of local training on resource-constrained gNBs, and the saturation of bandwidth-limited backhaul links during the global aggregation of high-dimensional model updates. To address these challenges, we propose CoCo-Fed, a novel Compression- and Combination-based Federated learning framework that unifies local memory efficiency with global communication reduction. Locally, CoCo-Fed breaks the memory wall by applying a double-dimension down-projection to gradients, allowing the optimizer to operate on low-rank structures without adding parameters or latency at inference. Globally, we introduce a transmission protocol based on orthogonal subspace superposition, in which layer-wise updates are projected and superimposed into a single consolidated matrix per gNB, drastically reducing backhaul traffic. Beyond these empirical designs, we establish a rigorous theoretical foundation, proving the convergence of CoCo-Fed even under the unsupervised learning conditions typical of wireless sensing tasks. Extensive simulations on an angle-of-arrival estimation task demonstrate that CoCo-Fed significantly outperforms state-of-the-art baselines in both memory and communication efficiency while maintaining robust convergence under non-IID settings.
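To make the local mechanism concrete, the following is a minimal sketch of a double-dimension (two-sided) gradient down-projection with the optimizer state kept in the low-rank space. The rank r, the SVD-based choice of projectors, and the Adam-style update rule are illustrative assumptions for exposition, not the exact algorithm of CoCo-Fed; function names such as `make_projectors` and `low_rank_adam_step` are hypothetical.

```python
# Sketch: two-sided (double-dimension) gradient down-projection so the
# optimizer state lives in an r x r space instead of m x n.
# Rank r, SVD-based projectors, and the Adam-style rule are assumptions.
import numpy as np

def make_projectors(G, r):
    """Left/right orthonormal bases spanning the top-r gradient subspaces."""
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :r], Vt[:r, :]            # P: (m, r), Q: (r, n)

def low_rank_adam_step(W, G, P, Q, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Run the optimizer on the r x r projected gradient, then project back."""
    g_low = P.T @ G @ Q.T                  # compress m x n gradient to r x r
    state["m"] = b1 * state["m"] + (1 - b1) * g_low
    state["v"] = b2 * state["v"] + (1 - b2) * g_low ** 2
    update_low = state["m"] / (np.sqrt(state["v"]) + eps)
    W -= lr * (P @ update_low @ Q)         # up-project; W itself stays full-size
    return W

# Toy usage: a 256 x 128 layer trained with rank-8 optimizer state.
m, n, r = 256, 128, 8
W = np.random.randn(m, n) * 0.01
G = np.random.randn(m, n)                  # stand-in for a backprop gradient
P, Q = make_projectors(G, r)
state = {"m": np.zeros((r, r)), "v": np.zeros((r, r))}
W = low_rank_adam_step(W, G, P, Q, state)
```

The memory saving in this sketch comes from the optimizer moments being r x r rather than m x n, while the deployed weights remain full-size, so no extra parameters or latency appear at inference.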
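For the global mechanism, the sketch below illustrates one way layer-wise (already compressed) updates could be embedded into mutually orthogonal subspaces of a shared basis and summed into a single consolidated payload per gNB, which the server then separates losslessly. The QR-based basis shared via a common seed, and the use of a single vector rather than a matrix as the payload, are simplifying assumptions rather than the paper's exact protocol.

```python
# Sketch: orthogonal-subspace superposition of per-layer updates into one
# payload per gNB, with exact recovery at the server.
# The seed-shared QR basis and vector-shaped payload are assumptions.
import numpy as np

def shared_basis(dim, seed=0):
    """Orthonormal basis known to both the gNB and the server (same seed)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return Q

def superpose(layer_updates, Q):
    """Embed each layer's update into its own orthogonal slice and sum."""
    s, offset = np.zeros(Q.shape[0]), 0
    for delta in layer_updates:
        v = delta.ravel()
        s += Q[:, offset:offset + v.size] @ v   # orthogonal embedding
        offset += v.size
    return s                                     # single consolidated payload

def recover(s, shapes, Q):
    """Server side: undo each orthogonal embedding and reshape per layer."""
    out, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append((Q[:, offset:offset + size].T @ s).reshape(shape))
        offset += size
    return out

# Toy usage: two low-rank layer updates packed into one payload.
updates = [np.random.randn(8, 8), np.random.randn(4, 16)]
dim = sum(u.size for u in updates)               # total compressed coefficients
Q = shared_basis(dim)
payload = superpose(updates, Q)
recovered = recover(payload, [u.shape for u in updates], Q)
assert all(np.allclose(a, b) for a, b in zip(updates, recovered))
```

In this toy, the per-layer updates stand in for the already low-rank local updates, so the single superposed payload scales with the total compressed size rather than with the full model dimension, which is where the backhaul reduction would come from under these assumptions.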