Proactive and agentic control in Sixth-Generation (6G) Open Radio Access Networks (O-RAN) requires control-grade prediction under stringent Near-Real-Time (Near-RT) latency and computational constraints. While Transformer-based models are effective for sequence modeling, their quadratic complexity limits scalability in Near-RT RAN Intelligent Controller (RIC) analytics. This paper investigates a post-Transformer design paradigm for efficient radio telemetry forecasting. We propose a quantum-inspired many-body state-space tensor network that replaces self-attention with stable structured state-space dynamics kernels, enabling linear-time sequence modeling. Tensor-network factorizations in the form of Tensor Train (TT) / Matrix Product State (MPS) representations are employed to reduce parameterization and data movement in both input projections and prediction heads, while lightweight channel gating and mixing layers capture non-stationary cross-Key Performance Indicator (KPI) dependencies. The proposed model is instantiated as an agentic perceive-predict xApp and evaluated on a bespoke O-RAN KPI time-series dataset comprising 59,441 sliding windows across 13 KPIs, using Reference Signal Received Power (RSRP) forecasting as a representative use case. Our proposed Linear Quantum-Inspired State-Space (LiQSS) model is 10.8x-15.8x smaller and approximately 1.4x faster than prior structured state-space baselines. Relative to Transformer-based models, LiQSS achieves up to a 155x reduction in parameter count and up to 2.74x faster inference, without sacrificing forecasting accuracy.