With the advent of 5G commercialization, more reliable, faster, and more intelligent telecommunication systems are envisaged for the next generation of radio access technologies beyond 5G (B5G). Artificial Intelligence (AI) and Machine Learning (ML) are immensely popular in service-layer applications and have been proposed as essential enablers in many aspects of 5G and beyond networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing surveys of ML-based 5G security tend to emphasize AI/ML model performance and accuracy over the models' accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which would allow stakeholders in 5G and beyond to inspect the intelligent black-box systems used to secure next-generation networks. The goal of using XAI in the security domain of 5G and beyond is to make the decision-making processes of ML-based security systems transparent and comprehensible to 5G and beyond stakeholders, holding the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as O-RAN, zero-touch network management, and end-to-end slicing, whose benefits general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions, building on currently conducted projects involving XAI.