In scenarios such as disaster communication and network connectivity for rural areas, unmanned aerial vehicles (UAVs) can serve as airborne base stations to improve both the capacity and coverage of communication networks. Ground users can rely on mobile UAVs to establish communication links and deliver data packets. UAVs, however, have limited transmission power and energy reserves: they cannot always cover an entire region or stay airborne for long periods, especially over a large territory. Controlling a swarm of UAVs to sustain long-lasting communication coverage while maintaining connectivity and limiting energy consumption is therefore challenging. We apply modern deep reinforcement learning (DRL) to UAV connectivity control and propose a novel, highly energy-efficient DRL-based algorithm. The proposed method: 1) maximizes a novel energy-efficiency objective that jointly accounts for communication throughput, energy consumption, fairness, and connectivity; 2) senses the environment and its dynamics; and 3) makes decisions using powerful deep neural networks. For performance evaluation, we have conducted comprehensive simulations. Simulation results show that the DRL-based algorithm consistently outperforms two commonly used baseline techniques in terms of energy consumption and fairness.
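As a rough illustration of how such a composite energy-efficiency objective can be scored, the sketch below combines a fairness measure, average coverage, and energy cost into a single reward. The function names, Jain's fairness index, and the multiplicative reward shape are assumptions chosen for illustration, not the paper's exact formulation.

```python
import numpy as np

def jain_fairness(coverage_scores):
    """Jain's fairness index over per-user coverage scores (1.0 = perfectly fair).
    Used here as one plausible fairness measure; an assumption, not the paper's choice."""
    x = np.asarray(coverage_scores, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum() + 1e-12)

def energy_efficiency_reward(coverage_scores, energy_used):
    """Illustrative composite objective: fairness-weighted mean coverage
    per unit of energy consumed over a control step."""
    fairness = jain_fairness(coverage_scores)
    mean_coverage = float(np.mean(coverage_scores))
    return fairness * mean_coverage / (energy_used + 1e-12)
```

A DRL agent trained against a reward of this shape is pushed to keep coverage high and evenly distributed while penalizing flight maneuvers that burn energy without improving service.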