Large language models (LLMs) have recently attracted considerable interest owing to their adaptability and extensibility across emerging applications, including communication networks. 6G mobile edge computing networks are expected to support LLMs as a service, since they provide ultra-reliable low-latency communication and closed-loop massive connectivity. However, LLMs are vulnerable to data and model privacy attacks, which undermines their trustworthiness when deployed in user-facing services. In this paper, we investigate the security vulnerabilities associated with fine-tuning LLMs in 6G networks, in particular the membership inference attack. We characterize an attack network that can perform a membership inference attack when the attacker has access to the model fine-tuned for the downstream task. We show that membership inference attacks are effective for any downstream task, which can lead to a personal data breach when LLMs are used as a service. Experimental results show that a maximum attack success rate of 92% can be achieved on the named entity recognition task. Based on this experimental analysis, we discuss possible defense mechanisms and outline research directions for making LLMs more trustworthy in the context of 6G networks.
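To make the threat concrete, the following is a minimal, self-contained sketch of the intuition behind a loss-threshold membership inference attack. It is not the paper's attack network: the loss distributions, threshold, and success metric here are illustrative assumptions, standing in for the observation that examples seen during fine-tuning tend to receive lower loss than unseen examples, which an attacker with access to the fine-tuned model can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-example losses from a fine-tuned model.
# Assumption (illustrative only): members of the fine-tuning set tend to
# have lower loss than non-members.
member_losses = rng.normal(loc=0.5, scale=0.2, size=1000)     # seen in fine-tuning
nonmember_losses = rng.normal(loc=1.5, scale=0.4, size=1000)  # unseen data

def infer_membership(losses, threshold):
    """Threshold attack: predict 'member' when the loss is low."""
    return losses < threshold

# Threshold chosen by the attacker, e.g. calibrated on shadow models.
threshold = 1.0
tpr = infer_membership(member_losses, threshold).mean()       # true positive rate
fpr = infer_membership(nonmember_losses, threshold).mean()    # false positive rate
attack_success = 0.5 * (tpr + (1.0 - fpr))                    # balanced attack accuracy

print(f"attack success rate ~ {attack_success:.2f}")
```

Under these synthetic distributions the simple threshold already separates members from non-members well, which illustrates why a fine-tuned model's loss behavior alone can leak membership information regardless of the downstream task.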