As the Metaverse envisions deeply immersive and pervasive connectivity in 6G networks, Integrated Access and Backhaul (IAB) emerges as a critical enabler for meeting the demanding requirements of massive, immersive communications. IAB networks offer a scalable solution for expanding broadband coverage in urban environments. However, optimizing IAB node deployment to ensure reliable coverage while minimizing cost remains challenging due to placement constraints and the dynamic nature of cities. Existing heuristic methods, such as greedy algorithms, have been employed to address these optimization problems. This work presents a novel Deep Reinforcement Learning (DRL) approach for IAB network planning, tailored to future 6G scenarios that must support the ultra-high data rates and dense device connectivity required by immersive Metaverse applications. We employ a Deep Q-Network (DQN) with action elimination and evaluate DQN, Double Deep Q-Network (DDQN), and Dueling DQN architectures to manage large state and action spaces effectively. Simulations with various initial donor configurations demonstrate the effectiveness of our DRL approach, with Dueling DQN reducing the node count by an average of 12.3% compared to traditional heuristics. The study underscores how advanced DRL techniques can address complex network planning challenges in 6G-enabled Metaverse contexts, providing an efficient and adaptive solution for IAB deployment in diverse urban environments.
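As a minimal illustration of the two ingredients named above, the Dueling DQN aggregation Q(s,a) = V(s) + A(s,a) - mean_a A(s,a) and action elimination (masking out infeasible candidate sites), the following NumPy sketch uses hypothetical values and is not the paper's implementation:

```python
import numpy as np

def dueling_q_values(value, advantages, valid_mask=None):
    """Combine a state value V(s) and per-action advantages A(s,a) into
    Q-values via Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).  Invalid actions
    (e.g. candidate IAB sites ruled out by placement constraints) are
    eliminated by setting their Q-value to -inf so argmax never picks them."""
    adv = np.asarray(advantages, dtype=float)
    q = value + adv - adv.mean()          # dueling aggregation
    if valid_mask is not None:
        q = np.where(valid_mask, q, -np.inf)  # action elimination
    return q

# Hypothetical example: 4 candidate IAB sites, site 2 is infeasible.
q = dueling_q_values(1.0, [0.5, -0.5, 2.0, 0.0],
                     valid_mask=[True, True, False, True])
best = int(np.argmax(q))  # site 2 has the largest raw advantage but is masked
```

In a full agent these V and A values would come from two heads of a shared network; subtracting the advantage mean keeps the V/A decomposition identifiable.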