Emerging AI/ML techniques have shown great potential for automating network control in open radio access networks (Open RAN). However, existing approaches rely heavily on black-box policies parameterized by deep neural networks, which inherently lack interpretability, explainability, and transparency, creating substantial obstacles to practical network deployment. In this paper, we propose inRAN, a novel interpretable online Bayesian learning framework for network automation in Open RAN. The core idea is to integrate interpretable surrogate models with safe optimization solvers to continually optimize control actions while adapting to the non-stationary dynamics of real-world networks. We realize the inRAN framework with three key components: 1) an interpretable surrogate model built by ensembling Kolmogorov-Arnold Networks (KANs); 2) safe optimization solvers that integrate genetic search with a trust-region descent method; and 3) an online dynamics tracker based on continual model learning and an adaptive threshold offset. We implement inRAN on an end-to-end O-RAN-compliant network testbed and conduct extensive over-the-air experiments focused on the network slicing use case. The results show that inRAN substantially outperforms state-of-the-art solutions: it guarantees the chance-based constraint with a 92.67% assurance ratio and comparable resource usage throughout online network control, under unforeseen, time-evolving network dynamics.
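To make the interplay of the three components concrete, the following is a minimal, self-contained Python sketch of the kind of control loop the abstract outlines. It is an illustration under loud assumptions, not the paper's implementation: the KAN ensemble is stood in by bootstrap-ensembled random-feature ridge regressors, the genetic search and trust-region refinement are heavily simplified, and every name (`fit_ensemble`, `safe_optimize`, `online_loop`, `env_step`) is hypothetical.

```python
# Hypothetical sketch of an inRAN-style loop: surrogate ensemble -> safe
# optimization (genetic search + shrinking local steps) -> online adaptation.
# NOT the paper's code; KANs are replaced by random-feature regressors here.
import numpy as np

rng = np.random.default_rng(0)

def fit_member(X, y, n_feats=64):
    """Fit one random-feature ridge regressor (stand-in for a single KAN)."""
    W = rng.normal(size=(X.shape[1], n_feats))
    Phi = np.tanh(X @ W)
    beta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_feats), Phi.T @ y)
    return lambda Xq: np.tanh(Xq @ W) @ beta

def fit_ensemble(X, y, k=5):
    """Bootstrap-ensemble the surrogate; member disagreement proxies uncertainty."""
    members = []
    for _ in range(k):
        idx = rng.integers(0, len(X), len(X))
        members.append(fit_member(X[idx], y[idx]))
    return members

def predict(ensemble, Xq):
    preds = np.stack([m(Xq) for m in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

def safe_optimize(ensemble, cost, bounds, offset, pop=40, gens=30):
    """Genetic search for low-cost actions whose pessimistic prediction clears
    the (offset-tightened) constraint, then crude shrinking-radius refinement."""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        mu, sd = predict(ensemble, P)
        feasible = mu - sd >= offset            # pessimistic chance-constraint proxy
        fit = np.where(feasible, -cost(P), -np.inf)
        elite = P[np.argsort(fit)[-pop // 4:]]  # keep the best quarter
        children = (elite[rng.integers(0, len(elite), pop)]
                    + rng.normal(0, 0.05, size=(pop, len(lo))))
        P = np.clip(children, lo, hi)
    mu, sd = predict(ensemble, P)
    scores = np.where(mu - sd >= offset, cost(P), np.inf)
    x = P[np.argmin(scores)]
    for radius in [0.05, 0.02, 0.01]:           # trust-region-style shrinking steps
        for _ in range(10):
            cand = np.clip(x + rng.normal(0, radius, x.shape), lo, hi)
            mu, sd = predict(ensemble, cand[None])
            if mu[0] - sd[0] >= offset and cost(cand[None])[0] < cost(x[None])[0]:
                x = cand
    return x

def online_loop(env_step, cost, bounds, target=1.0, rounds=50):
    """Online tracker: act, observe, refit the surrogate, adapt the offset."""
    X, y, offset, violations = [], [], target, 0
    for t in range(rounds):
        if len(X) < 10:                          # bootstrap with random probing
            x = rng.uniform(*bounds)
        else:
            x = safe_optimize(fit_ensemble(np.array(X), np.array(y)),
                              cost, bounds, offset)
        perf = env_step(x)                       # observe real network feedback
        X.append(x); y.append(perf)
        violations += perf < target
        # Adaptive threshold offset: tighten as observed violations accumulate.
        offset = target + 0.1 * violations / (t + 1)
    return 1 - violations / rounds               # achieved assurance ratio
```

In this sketch, the pessimistic bound `mu - sd` plays the role of a chance-constraint surrogate: an action is treated as safe only if the ensemble agrees it clears the threshold, and the adaptive offset raises that bar whenever the live network violates the target, mirroring (in simplified form) the continual-learning and threshold-offset roles the abstract ascribes to the online dynamics tracker.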