Dynamic resource allocation in O-RAN is critical for managing the conflicting QoS requirements of 6G network slices. Conventional deep reinforcement learning (DRL) agents often fail in this domain because their unimodal policy structures cannot model the multi-modal nature of optimal allocation strategies. This paper introduces Diffusion Q-Learning (Diffusion-QL), a novel framework that represents the policy as a conditional diffusion model. Our approach generates resource allocation actions by iteratively reversing a noising process, with each denoising step guided by the gradient of a learned Q-function. This enables the policy to learn and sample from the complex, multi-modal distribution of near-optimal actions. Simulations demonstrate that Diffusion-QL consistently outperforms state-of-the-art DRL baselines, offering a robust solution for the intricate resource management challenges of next-generation wireless networks.
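To make the guided sampling procedure concrete, the sketch below shows one plausible PyTorch realization of a conditional diffusion policy whose reverse (denoising) steps are nudged along the gradient of a learned Q-function. This is a minimal illustration under stated assumptions, not the paper's implementation: the class and function names (`DiffusionPolicy`, `sample_action`, `eps_net`, `q_net`), the 5-step linear noise schedule, and the `guidance_scale` hyperparameter are all hypothetical choices, and the DDPM-style update is one standard way to reverse a noising process.

```python
import torch
import torch.nn as nn


class DiffusionPolicy(nn.Module):
    """Conditional diffusion model: denoises an action conditioned on the state.

    A minimal sketch; the hidden width, T=5 steps, and the linear beta
    schedule are illustrative assumptions, not values from the paper.
    """

    def __init__(self, state_dim, action_dim, hidden=256, T=5):
        super().__init__()
        self.T, self.action_dim = T, action_dim
        betas = torch.linspace(1e-4, 0.1, T)           # forward-noising variances
        alphas = 1.0 - betas
        self.register_buffer("betas", betas)
        self.register_buffer("alphas", alphas)
        self.register_buffer("alpha_bars", torch.cumprod(alphas, dim=0))
        # eps_theta(a_t, t, s): predicts the noise injected at step t.
        self.eps_net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.Mish(),
            nn.Linear(hidden, hidden), nn.Mish(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, a_t, t, state):
        # Scalar timestep embedding in [0, 1], broadcast over the batch.
        t_emb = torch.full_like(a_t[:, :1], t / self.T)
        return self.eps_net(torch.cat([a_t, t_emb, state], dim=-1))


def sample_action(policy, q_net, state, guidance_scale=0.1):
    """Draw an allocation action by reversing the noising chain, with each
    denoising step shifted along grad_a Q(s, a) toward higher value."""
    a = torch.randn(state.shape[0], policy.action_dim, device=state.device)
    for t in reversed(range(policy.T)):
        beta, alpha, abar = policy.betas[t], policy.alphas[t], policy.alpha_bars[t]
        with torch.no_grad():
            eps = policy(a, t, state)
            # Posterior mean of the DDPM reverse step.
            mean = (a - beta / torch.sqrt(1.0 - abar) * eps) / torch.sqrt(alpha)
        # Q-gradient guidance: nudge the denoised sample uphill on Q(s, a).
        a_in = mean.detach().requires_grad_(True)
        (q_grad,) = torch.autograd.grad(q_net(state, a_in).sum(), a_in)
        mean = mean + guidance_scale * q_grad
        noise = torch.randn_like(a) if t > 0 else torch.zeros_like(a)
        a = mean + torch.sqrt(beta) * noise
    # Allocations assumed normalized to [-1, 1] before being mapped to PRBs.
    return a.clamp(-1.0, 1.0)
```

A small step count such as T=5 is a common design choice for diffusion policies in control loops, since per-decision latency grows linearly in T; this matters in O-RAN, where near-real-time RIC control loops operate on a 10 ms to 1 s timescale.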