With the emergence of new application areas such as cyber-physical systems and human-in-the-loop applications, ensuring a specific level of end-to-end network latency with high reliability (e.g., 99.9%) is becoming increasingly critical. To align wireless links with these reliability requirements, it is essential to analyze and control network latency in terms of its full probability distribution. However, in a wireless link the distribution may vary over time, making this task particularly challenging. We propose predicting the latency distribution using state-of-the-art data-driven techniques that leverage historical network information. Our approach tokenizes network state information and processes it with temporal deep-learning architectures, namely LSTM and Transformer models, to capture both short- and long-term delay dependencies. These models output the parameters of a chosen parametric density via a mixture density network with Gaussian mixtures, yielding multi-step probabilistic forecasts of future delays. To validate the proposed approach, we implemented and tested these methods on a time-synchronized, SDR-based OpenAirInterface 5G testbed used to collect and preprocess network-delay data. Our experiments show that the Transformer model achieves lower negative log-likelihood and mean absolute error than both LSTM and feed-forward baselines in challenging scenarios, while also providing insights into model complexity and training/inference overhead. This framework enables more informed decision-making for adaptive scheduling and resource allocation, paving the way toward enhanced QoS in evolving 5G and 6G networks.
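As a concrete illustration of the training objective described above, the sketch below computes the negative log-likelihood of observed delays under a Gaussian mixture whose parameters (mixture logits, means, log standard deviations) would be emitted by the network's mixture-density head. This is a minimal, hypothetical sketch in plain numpy; the function name, array shapes, and parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gmm_nll(y, logits, means, log_sigmas):
    """Mean negative log-likelihood of targets under a Gaussian mixture.

    y:          (N,)   observed delays
    logits:     (N, K) unnormalized mixture weights per sample
    means:      (N, K) component means
    log_sigmas: (N, K) component log standard deviations

    Shapes and the log-sigma parameterization are illustrative
    assumptions for this sketch.
    """
    # log-softmax over components gives log mixture weights
    log_w = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    sigmas = np.exp(log_sigmas)
    # per-component Gaussian log density of each target
    log_pdf = (-0.5 * np.log(2 * np.pi) - log_sigmas
               - 0.5 * ((y[:, None] - means) / sigmas) ** 2)
    # log-sum-exp over components for numerical stability
    log_lik = np.logaddexp.reduce(log_w + log_pdf, axis=1)
    return -log_lik.mean()
```

Minimizing this quantity over the head's outputs is what drives the probabilistic forecast: parameter sets that place probability mass near the observed delays receive a lower NLL than mis-centered ones.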