This paper provides a systematic survey of jailbreak attacks and defenses for Large Language Models (LLMs) and Vision-Language Models (VLMs), emphasizing that jailbreak vulnerabilities stem from structural factors such as incomplete training data, linguistic ambiguity, and generative uncertainty. We further differentiate hallucinations from jailbreaks in terms of intent and triggering mechanism. We organize the literature along three dimensions: (1) the attack dimension, including template- and encoding-based methods, in-context-learning manipulation, reinforcement- and adversarial-learning attacks, LLM-assisted and fine-tuning-based attacks, as well as prompt- and image-level perturbations and agent-based transfer attacks on VLMs; (2) the defense dimension, encompassing prompt-level obfuscation, output evaluation, and model-level alignment or fine-tuning; and (3) the evaluation dimension, covering metrics such as Attack Success Rate (ASR), toxicity score, query and time cost, and, in multimodal settings, Clean Accuracy and Attribute Success Rate. Compared with prior surveys, this work spans the full spectrum from text-only to multimodal settings, consolidates shared attack mechanisms, and proposes unified defense principles: variant-consistency and gradient-sensitivity detection at the perception layer, safety-aware decoding and output review at the generation layer, and adversarially augmented preference alignment at the parameter layer. Finally, we summarize existing multimodal safety benchmarks and discuss future directions, including automated red teaming, cross-modal collaborative defense, and standardized evaluation.