Large Language Models (LLMs) have revolutionized artificial intelligence and machine learning through their advanced text processing and generation capabilities. However, their widespread deployment has raised significant safety and reliability concerns. Established vulnerabilities in deep neural networks, coupled with emerging threat models, may compromise security evaluations and create a false sense of security. Given the extensive research on LLM security, we believe that summarizing the current state of the field will help the research community better understand the present landscape and inform future developments. This paper reviews current research on LLM vulnerabilities and threats, and evaluates the effectiveness of contemporary defense mechanisms. We analyze recent studies on attack vectors and model weaknesses, providing insights into attack mechanisms and the evolving threat landscape. We also examine current defense strategies, highlighting their strengths and limitations. By contrasting advancements in attack and defense methodologies, we identify research gaps and propose future directions for enhancing LLM security. Our goal is to advance the understanding of LLM safety challenges and guide the development of more robust security measures.