Internet of Things (IoT) devices have become increasingly ubiquitous, with applications not only in urban areas but in remote areas as well. These devices support industries such as agriculture, forestry, and resource extraction. Because these devices are often located in remote areas, satellites are frequently used to collect IoT device data and deliver it to customers. As these devices become more advanced and numerous, the amount of data produced has grown rapidly, potentially straining radio frequency (RF) downlink capacity. Free space optical communications, with their wide available bandwidths and high data rates, are a potential solution, but such communication systems are highly vulnerable to weather-related disruptions. As a result, certain communication opportunities are inefficient in terms of the amount of data received relative to the power expended. In this paper, we propose a deep reinforcement learning (DRL) method based on Deep Q-Networks that takes advantage of weather forecasts to improve energy efficiency while delivering the same number of packets as schemes that do not factor weather into routing decisions. We compare this method against baseline approaches that use simple cloud cover thresholds to improve energy efficiency. In testing, the DRL approach provides improved median energy efficiency without a significant reduction in median delivery ratio. Simple cloud cover thresholds were also found to be effective, but the thresholds with the highest energy efficiency reduced the median delivery ratio.