The integration of Unmanned Aerial Vehicles (UAVs) into Open Radio Access Networks (O-RAN) enhances communication in disaster management and Search and Rescue (SAR) operations by ensuring connectivity when infrastructure fails. However, SAR scenarios demand stringent security and low-latency communication, as delays or breaches can compromise mission success. While UAVs serve as mobile relays, they introduce challenges in energy consumption and resource management, necessitating intelligent allocation strategies. Existing UAV-assisted O-RAN approaches often overlook the joint optimization of security, latency, and energy efficiency in dynamic environments. This paper proposes a novel Reinforcement Learning (RL)-based framework for dynamic resource allocation in UAV relays that explicitly addresses these trade-offs. Our approach formulates an optimization problem integrating security-aware resource allocation, latency minimization, and energy efficiency, and solves it using RL. Unlike heuristic or static methods, our framework adapts in real time to network dynamics, ensuring robust communication. Simulations demonstrate superior performance over heuristic baselines, achieving enhanced security and energy efficiency while maintaining ultra-low latency in SAR scenarios.
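The kind of RL-based allocation the abstract describes can be illustrated with a minimal tabular Q-learning sketch. Everything below is a toy assumption for illustration only: the discretized channel-quality states, the three power/resource levels, and the composite reward weighting throughput (a latency proxy), a secrecy margin (security), and an energy penalty are invented here and are not the paper's actual formulation.

```python
import random

# Hypothetical toy model -- state/action spaces and reward weights are
# illustrative assumptions, not the paper's formulation.
STATES = 4    # discretized channel-quality levels seen by the UAV relay
ACTIONS = 3   # resource levels: 0 = low power, 1 = medium, 2 = high

def reward(state, action):
    """Composite reward: throughput (latency proxy) + secrecy margin - energy."""
    throughput = (state + 1) * (action + 1)   # better channel + more power -> faster
    secrecy = 0.5 * action                    # more power widens the secrecy margin
    energy = 1.0 * action ** 2                # energy cost grows with power level
    return throughput + secrecy - energy

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * ACTIONS for _ in range(STATES)]
    state = rng.randrange(STATES)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.randrange(ACTIONS)
        else:
            action = max(range(ACTIONS), key=lambda a: Q[state][a])
        r = reward(state, action)
        next_state = rng.randrange(STATES)    # channel evolves randomly in this toy
        Q[state][action] += alpha * (r + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
    return Q

Q = train()
policy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(STATES)]
print(policy)   # learned resource level per channel-quality state
```

The point of the sketch is the adaptivity claimed in the abstract: rather than a fixed heuristic mapping, the learned policy assigns different resource levels to different network conditions, trading energy against throughput and security per state.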