With the widespread deployment of Multimodal Large Language Models (MLLMs) for visual-reasoning tasks, improving their safety has become crucial. Recent research indicates that, despite training-time safety alignment, these models remain vulnerable to jailbreak attacks. In this work, we first highlight an important safety gap: alignment achieved solely through safety training may be insufficient to defend against jailbreak attacks. To address this vulnerability, we propose Immune, an inference-time defense framework that leverages a safe reward model through controlled decoding to defend against jailbreak attacks. Additionally, we provide a mathematical characterization of Immune, offering insights into why it improves safety against jailbreaks. Extensive evaluations on diverse jailbreak benchmarks with recent MLLMs show that Immune effectively enhances model safety while preserving the model's original capabilities. For instance, against text-based jailbreak attacks on LLaVA-1.6, Immune reduces the attack success rate by 57.82% and 16.78% compared to the base MLLM and the state-of-the-art defense strategy, respectively.
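The core idea in the abstract is reward-guided controlled decoding at inference time: the base MLLM proposes next tokens, and a safety reward model re-scores them so that unsafe continuations are suppressed without retraining the model. The sketch below is a minimal, illustrative toy of that general recipe, not the paper's actual implementation; the functions `controlled_decode`, `toy_base_logprobs`, and `toy_safety_reward`, the weight `alpha`, and the token lists are all hypothetical stand-ins introduced only for illustration.

```python
# Minimal sketch of reward-guided (controlled) decoding for safety.
# Everything here is a toy stand-in (assumed, not from the paper): a real system
# would use an MLLM for log-probabilities and a learned safety reward model.
from typing import Callable, Dict, List

def controlled_decode(
    base_logprobs: Callable[[List[str]], Dict[str, float]],  # prefix -> {token: log p(token | prefix)}
    safety_reward: Callable[[List[str]], float],             # partial output -> scalar safety score
    alpha: float = 1.0,                                      # weight on the safety reward
    max_steps: int = 20,
    top_k: int = 5,
    eos: str = "<eos>",
) -> List[str]:
    """Greedily pick, at each step, the candidate token maximizing
    log p(token | prefix) + alpha * safety_reward(prefix + [token])."""
    output: List[str] = []
    for _ in range(max_steps):
        logprobs = base_logprobs(output)
        candidates = sorted(logprobs, key=logprobs.get, reverse=True)[:top_k]
        best = max(
            candidates,
            key=lambda tok: logprobs[tok] + alpha * safety_reward(output + [tok]),
        )
        if best == eos:
            break
        output.append(best)
    return output

# --- Toy stand-ins purely for illustration (hypothetical) ---
UNSAFE = ["Sure,", "here", "is", "how", "to", "..."]          # compliance-style reply
REFUSAL = ["I", "cannot", "help", "with", "that.", "<eos>"]   # safe refusal
UNSAFE_WORDS = set(UNSAFE)

def toy_base_logprobs(prefix: List[str]) -> Dict[str, float]:
    """Toy 'MLLM': slightly prefers continuing the unsafe reply over the refusal."""
    step = len(prefix)
    scores: Dict[str, float] = {}
    if step < len(UNSAFE):
        scores[UNSAFE[step]] = -1.0
    if step < len(REFUSAL):
        scores[REFUSAL[step]] = -1.5
    scores.setdefault("<eos>", -10.0)
    return scores

def toy_safety_reward(partial: List[str]) -> float:
    """Toy safety reward: penalizes compliance-style tokens in the partial output."""
    return -5.0 * sum(tok in UNSAFE_WORDS for tok in partial)

if __name__ == "__main__":
    # With the safety reward switched on, decoding steers to the refusal.
    print(" ".join(controlled_decode(toy_base_logprobs, toy_safety_reward, alpha=2.0)))
    # -> "I cannot help with that."
```

Under these toy definitions, the base model alone would emit the compliance-style reply, while the reward-weighted score steers decoding to the refusal; this is only meant to illustrate the inference-time, training-free flavor of the defense described in the abstract.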