Radio Access Network (RAN) slicing enables multiple logical networks to coexist on the same physical infrastructure by allocating resources to distinct service groups, with radio resource scheduling playing a key role in ensuring compliance with slice-specific Service-Level Agreements (SLAs). Existing configuration-based or intent-driven Reinforcement Learning (RL) approaches usually rely on static mappings and SLA conversions, and the current literature does not integrate natural language understanding with coordinated decision-making. To address these limitations, we propose an Agentic AI framework for 6G RAN slicing, driven by a super agent that combines a Large Language Model (LLM) with Hierarchical Decision Mamba (HDM) controllers. The super agent uses the LLM to interpret operator intents and translate them into actionable goals, which the HDM controllers then use to coordinate inter-slice, intra-slice, and self-healing agents. Compared to transformer-based and reward-driven baselines, the proposed Agentic AI framework demonstrates consistent improvements across key performance indicators, including higher throughput, improved cell-edge performance, and reduced latency across different slices.
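To make the intent-to-action pipeline concrete, the following is a minimal toy sketch of the flow described above: an operator intent is translated into per-slice goals, which are then routed to the agents that act on them. The `parse_intent` function is a rule-based stand-in for the LLM, and the `SliceGoal` fields, KPI names, and agent routing are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SliceGoal:
    # Hypothetical goal representation; field names are assumptions.
    slice_name: str
    kpi: str       # e.g. "latency_ms" or "throughput_mbps"
    target: float

def parse_intent(intent: str) -> list[SliceGoal]:
    """Toy stand-in for the LLM intent parser: maps keywords in an
    operator intent to per-slice KPI goals."""
    goals = []
    text = intent.lower()
    if "urllc" in text:
        goals.append(SliceGoal("URLLC", "latency_ms", 1.0))
    if "embb" in text:
        goals.append(SliceGoal("eMBB", "throughput_mbps", 100.0))
    return goals

def dispatch(goals: list[SliceGoal]) -> dict[str, list[SliceGoal]]:
    """Toy coordinator standing in for the HDM controllers: routes
    latency goals to the intra-slice agent and throughput goals to
    the inter-slice agent (an illustrative policy, not the paper's)."""
    routing: dict[str, list[SliceGoal]] = {}
    for g in goals:
        agent = "intra_slice" if g.kpi.startswith("latency") else "inter_slice"
        routing.setdefault(agent, []).append(g)
    return routing

goals = parse_intent("Guarantee 1 ms URLLC latency and boost eMBB throughput")
print(dispatch(goals))
```

In the proposed framework, the rule-based parser would be replaced by the LLM and the dispatch table by the learned HDM controllers; the sketch only illustrates the data flow from natural-language intent to coordinated agent goals.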