Fifth-generation (5G) networks offer advanced services, supporting Internet of Things (IoT) applications such as intelligent transportation, connected healthcare, and smart cities. However, these advancements introduce significant security challenges, as cyber-attacks grow increasingly sophisticated. This paper proposes a robust intrusion detection system (IDS) built on federated learning and large language models (LLMs). The core of our IDS is BERT, a transformer model adapted to identify malicious network flows, which we modified to perform efficiently on edge devices with limited resources. Experiments were conducted in both centralized and federated learning settings. In the centralized setup, the model achieved an inference accuracy of 97.79%. In the federated setting, the model was trained across multiple devices under several scenarios using both IID (Independent and Identically Distributed) and non-IID data, preserving data privacy and regulatory compliance. We also applied linear quantization to compress the model for edge deployment, reducing model size by 28.74% at the cost of only a 0.02% drop in accuracy. These results underscore the viability of LLMs in IoT ecosystems, highlighting their ability to operate on devices with constrained computational and storage resources.
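The federated training described above aggregates locally trained models into a global one. The paper does not give its aggregation rule, but the standard choice is FedAvg, where each client's parameters are weighted by its local dataset size; a minimal sketch (toy parameter dicts and sample counts are illustrative, not from the paper):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average each parameter tensor across clients,
    weighting every client by its local dataset size."""
    total = sum(client_sizes)
    avg = {}
    for name in client_weights[0]:
        avg[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return avg

# Toy example: three clients whose (non-IID) data volumes differ,
# so the larger clients pull the global parameters toward their values.
clients = [{"layer.weight": np.full((2, 2), v)} for v in (1.0, 2.0, 4.0)]
sizes = [100, 300, 600]  # simulated local sample counts
global_w = fed_avg(clients, sizes)
print(global_w["layer.weight"])  # every entry is (100*1 + 300*2 + 600*4)/1000 = 3.1
```

Under non-IID partitions, this size-weighted average is what lets skewed clients contribute proportionally without ever sharing raw traffic data.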
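The linear quantization mentioned above maps floating-point weights onto low-precision integers through a single scale factor. The abstract does not specify the scheme; the sketch below shows generic symmetric per-tensor int8 quantization (the bit width, random toy weights, and function names are assumptions for illustration):

```python
import numpy as np

def linear_quantize(w: np.ndarray, num_bits: int = 8):
    """Symmetric per-tensor linear quantization: real weights are mapped
    to signed integers via a single scale, q = round(w / scale)."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = float(np.max(np.abs(w))) / qmax   # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)  # toy weight matrix
q, scale = linear_quantize(w)
w_hat = dequantize(q, scale)
err = float(np.max(np.abs(w - w_hat)))
print(f"max abs reconstruction error: {err:.6f} (bounded by scale/2 = {scale/2:.6f})")
```

Storing int8 instead of float32 cuts the raw weight footprint by 4x; the modest 28.74% overall reduction reported in the abstract is consistent with quantizing only part of the model while keeping other components at full precision.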