With the widespread deployment of Multimodal Large Language Models (MLLMs) for visual-reasoning tasks, improving their safety has become crucial. Recent research indicates that despite training-time safety alignment, these models remain vulnerable to jailbreak attacks: carefully crafted image-prompt pairs that compel the model to generate harmful content. In this work, we first highlight a critical safety gap, demonstrating that alignment achieved solely through safety training may be insufficient against jailbreak attacks. To address this vulnerability, we propose Immune, an inference-time defense framework that leverages a safe reward model during decoding to defend against jailbreak attacks. Additionally, we provide a rigorous mathematical characterization of Immune, offering provable guarantees against jailbreaks. Extensive evaluations on diverse jailbreak benchmarks using recent MLLMs reveal that Immune effectively enhances model safety while preserving the model's original capabilities. For instance, against text-based jailbreak attacks on LLaVA-1.6, Immune reduces the attack success rate by 57.82% and 16.78% compared to the base MLLM and state-of-the-art defense strategy, respectively.
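The abstract states only that Immune leverages a safety reward model during decoding; the sketch below illustrates one common form such reward-guided decoding can take (re-scoring top-k candidate tokens with a weighted safety reward), under the assumption that this is representative of inference-time alignment. It is not the paper's exact algorithm, and every function and name here is hypothetical.

```python
# Illustrative sketch of inference-time, reward-guided decoding.
# All models are stand-ins; this is NOT the Immune implementation.
import math
import random

VOCAB = ["I", "cannot", "help", "with", "that", ".", "Sure", "here", "is", "how"]

def base_logprobs(prompt, prefix):
    """Stand-in for the MLLM's next-token log-probabilities (hypothetical)."""
    random.seed(hash((prompt, tuple(prefix))) % (2**32))
    logits = [random.gauss(0.0, 1.0) for _ in VOCAB]
    log_z = math.log(sum(math.exp(l) for l in logits))
    return {tok: l - log_z for tok, l in zip(VOCAB, logits)}

def safety_reward(prompt, prefix, token):
    """Stand-in for a safety reward model scoring a candidate continuation (hypothetical)."""
    unsafe_markers = {"Sure", "here", "how"}
    return -1.0 if token in unsafe_markers else 0.5

def guided_decode(prompt, alpha=2.0, max_len=6, top_k=5):
    """At each step, re-score the top-k candidate tokens by combining the base
    model's log-probability with a weighted safety reward, then pick the best."""
    prefix = []
    for _ in range(max_len):
        lp = base_logprobs(prompt, prefix)
        candidates = sorted(lp, key=lp.get, reverse=True)[:top_k]
        scored = {t: lp[t] + alpha * safety_reward(prompt, prefix, t) for t in candidates}
        prefix.append(max(scored, key=scored.get))
    return " ".join(prefix)

if __name__ == "__main__":
    print(guided_decode("adversarial image-prompt pair"))
```

The weight `alpha` trades off faithfulness to the base model's distribution against the safety reward; in this toy setting a larger `alpha` pushes decoding away from tokens the reward model flags as unsafe while leaving benign continuations largely unchanged.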