Proactive and agentic control in Sixth-Generation (6G) Open Radio Access Networks (O-RAN) requires control-grade prediction under stringent Near-Real-Time (Near-RT) latency and computational constraints. While Transformer-based models are effective for sequence modeling, their quadratic complexity limits scalability in Near-RT RAN Intelligent Controller (RIC) analytics. This paper investigates a post-Transformer design paradigm for efficient radio telemetry forecasting. We propose a quantum-inspired many-body state-space tensor network that replaces self-attention with stable structured state-space dynamics kernels, enabling linear-time sequence modeling. Tensor-network factorizations in the form of Tensor Train (TT) / Matrix Product State (MPS) representations are employed to reduce parameterization and data movement in both input projections and prediction heads, while lightweight channel gating and mixing layers capture non-stationary cross-Key Performance Indicator (KPI) dependencies. The proposed model is instantiated as an agentic perceive-predict xApp and evaluated on a bespoke O-RAN KPI time-series dataset comprising 59,441 sliding windows across 13 KPIs, using Reference Signal Received Power (RSRP) forecasting as a representative use case. Our proposed Linear Quantum-Inspired State-Space (LiQSS) model is 10.8x-15.8x smaller and approximately 1.4x faster than prior structured state-space baselines. Relative to Transformer-based models, LiQSS achieves up to a 155x reduction in parameter count and up to 2.74x faster inference, without sacrificing forecasting accuracy.
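The TT/MPS factorization of projection layers described above can be illustrated with a minimal NumPy sketch. All shapes, ranks, and names below are illustrative assumptions of ours, not the paper's actual configuration: a dense 64x64 projection weight is replaced by two small TT cores, cutting the parameter count by 8x while computing the same linear map.

```python
import numpy as np

# Hypothetical sketch of a TT/MPS-factorized linear projection.
# A dense weight W of shape (d_in, d_out) is replaced by a chain of
# small TT cores; dimensions and the rank r are illustrative only.
rng = np.random.default_rng(0)

d_in, d_out = 64, 64                 # factored as 8*8 -> 8*8
m1, m2, n1, n2, r = 8, 8, 8, 8, 4    # mode sizes and TT rank

# Two TT cores standing in for the dense 64x64 weight:
# W[(i,j),(k,l)] = sum_r G1[i,k,r] * G2[r,j,l]
G1 = rng.standard_normal((m1, n1, r)) * 0.1   # (in-mode 1, out-mode 1, rank)
G2 = rng.standard_normal((r, m2, n2)) * 0.1   # (rank, in-mode 2, out-mode 2)

def tt_linear(x):
    """Apply the TT-factorized projection to a batch x of shape (B, d_in)."""
    B = x.shape[0]
    x4 = x.reshape(B, m1, m2)                  # split the input index into modes
    t = np.einsum('bij,ikr->bjkr', x4, G1)     # contract input mode 1 with G1
    y = np.einsum('bjkr,rjl->bkl', t, G2)      # contract input mode 2 with G2
    return y.reshape(B, d_out)                 # merge the output modes

# Reconstruct the equivalent dense weight to sanity-check the contraction.
W = np.einsum('ikr,rjl->ijkl', G1, G2).reshape(d_in, d_out)

# Parameter comparison: 64*64 = 4096 dense vs. 8*8*4 + 4*8*8 = 512 TT.
print(G1.size + G2.size, 'TT params vs', W.size, 'dense params')
```

Because the contraction never materializes the full weight, both the parameter storage and the per-step data movement scale with the TT rank rather than with d_in x d_out, which is the efficiency argument the abstract makes for the input projections and prediction heads.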