Integrating Large AI Models (LAMs) into 6G mobile networks is a key enabler of the AI-Native Air Interface (AI-AI), where protocol intelligence must scale beyond handcrafted logic. This paper presents, to our knowledge, the first standards-compliant emulation of the Radio Resource Control (RRC) layer using a decoder-only LAM (LLaMA-class) fine-tuned with Low-Rank Adaptation (LoRA) on a multi-vendor corpus of real-world traces spanning both 5G and 4G systems. We treat RRC as a domain-specific language and construct a segmentation-safe question-and-answer (QA) dataset that preserves Abstract Syntax Notation One (ASN.1) structure through linearization prior to Byte Pair Encoding (BPE) tokenization. The proposed approach combines parameter-efficient adaptation with schema-bounded prompting to ensure syntactic and procedural fidelity. For evaluation, we introduce a standards-aware triad -- ASN.1 conformance, field-level coverage analysis, and uplink-to-downlink state-machine checks -- alongside semantic-similarity and latency profiling across 120 configurations. On 30k 5G request-response pairs plus an additional 4.8k QA turns from 4G sessions, our 8B-parameter model achieves a median cosine similarity of 0.97, a 61% relative gain over a zero-shot baseline, while sustaining high conformance rates. These results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures, laying the foundation for the future AI-native Radio Access Network (RAN).
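The headline metric above, median cosine similarity between generated and reference responses, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the embedding model is omitted, and the toy three-dimensional vectors stand in for real sentence embeddings of generated and reference RRC messages.

```python
# Hedged sketch of a median cosine-similarity score over a corpus of
# (generated, reference) embedding pairs. Real embeddings would come
# from a sentence-encoder applied to the RRC message text.
from math import sqrt
from statistics import median

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def median_similarity(generated, reference):
    """Median pairwise cosine similarity across aligned embedding lists."""
    return median(cosine_similarity(g, r) for g, r in zip(generated, reference))

# Toy 3-dimensional "embeddings" standing in for real sentence embeddings.
gen = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]]
ref = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.1]]
print(round(median_similarity(gen, ref), 3))
```

Reporting the median (rather than the mean) makes the score robust to a small number of badly degenerate generations, which matters when a single malformed ASN.1 message can embed far from its reference.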