Large Language Models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as text generation and summarization, language translation, and question answering. Nowadays, LLMs have become popular tools in computerized language processing tasks, capable of analyzing complex linguistic patterns and providing relevant, appropriate responses depending on the context. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking attacks, data poisoning attacks, and Personally Identifiable Information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs, for both training data and users, along with the application-based risks in various domains such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks against LLMs, and review potential defense mechanisms. Additionally, the survey outlines existing research gaps in this domain and highlights future research directions.