Cell-free massive multiple-input multiple-output (MIMO) promises uniformly high performance across the network, but it also incurs a high energy cost due to joint transmission from distributed radio units (RUs) and centralized processing in the cloud. Leveraging the resource-sharing capabilities of the Open Radio Access Network (O-RAN) architecture, we propose EARL, an energy-aware adaptive antenna control framework based on reinforcement learning (RL). EARL dynamically configures the active antenna elements at each RU to minimize radio, optical fronthaul, and cloud processing power consumption while meeting user spectral efficiency demands. Numerical results show power savings of up to 81% and 50% over a full-activation (all antennas on) baseline and a heuristic baseline, respectively. The RL-based approach executes within 220 ms, satisfying O-RAN's near-real-time constraint, and a greedy refinement further halves power consumption at a 2 s runtime.
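For concreteness, the following is a minimal sketch of the kind of antenna-activation problem the abstract describes; the notation is illustrative and not taken from the paper. Here $a_r$ is a hypothetical number of active antenna elements at RU $r$ (out of $A_r$ installed), $P_{\mathrm{radio}}$, $P_{\mathrm{fh}}$, and $P_{\mathrm{cloud}}$ are the radio, optical fronthaul, and cloud processing power terms, and $\mathrm{SE}_k$ is the spectral efficiency delivered to user $k$ against its demand $\mathrm{SE}_k^{\mathrm{req}}$.

% Hypothetical formulation sketched from the abstract; all symbols are illustrative.
\begin{equation}
\begin{aligned}
\min_{\{a_r\}_{r=1}^{R}} \quad & P_{\mathrm{radio}}\!\left(\{a_r\}\right) + P_{\mathrm{fh}}\!\left(\{a_r\}\right) + P_{\mathrm{cloud}}\!\left(\{a_r\}\right) \\
\text{s.t.} \quad & \mathrm{SE}_k\!\left(\{a_r\}\right) \ge \mathrm{SE}_k^{\mathrm{req}}, \quad k = 1, \dots, K, \\
& a_r \in \{0, 1, \dots, A_r\}, \quad r = 1, \dots, R.
\end{aligned}
\end{equation}

Under this reading, the RL agent selects $\{a_r\}$ within the near-real-time budget, and the greedy refinement further prunes active elements at the cost of longer runtime.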