Augmenting language models with image inputs may enable more effective jailbreak attacks through continuous optimization, unlike text inputs, which require discrete optimization. However, new multimodal fusion models tokenize all input modalities using non-differentiable functions, which hinders straightforward attacks. In this work, we introduce the notion of a tokenizer shortcut that approximates tokenization with a continuous function and enables continuous optimization. We use tokenizer shortcuts to create the first end-to-end gradient image attacks against multimodal fusion models. We evaluate our attacks on Chameleon models and obtain jailbreak images that elicit harmful information for 72.5% of prompts. Jailbreak images outperform text jailbreaks optimized with the same objective, while requiring a 3x lower compute budget to optimize 50x more input tokens. Finally, we find that representation engineering defenses, like Circuit Breakers, trained only on text attacks can effectively transfer to adversarial image inputs.
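To illustrate the core idea, here is a minimal, hypothetical sketch of a tokenizer shortcut. It assumes a vector-quantizing image tokenizer that snaps a patch embedding to its nearest codebook entry (an argmin, which blocks gradients), and replaces the argmin with a softmax over negative distances, a standard continuous relaxation. The codebook size, embedding dimension, and temperature below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # 8 token embeddings of dimension 4 (illustrative)

def hard_tokenize(x):
    """Non-differentiable tokenizer: snap x to its nearest codebook row."""
    d = np.linalg.norm(codebook - x, axis=1)
    return codebook[np.argmin(d)]

def soft_tokenize(x, tau=0.5):
    """Continuous shortcut: softmax-weighted mixture of codebook rows.

    Subtracting d.min() before the exponential keeps the softmax
    numerically stable at small temperatures.
    """
    d = np.linalg.norm(codebook - x, axis=1)
    w = np.exp(-(d - d.min()) / tau)
    w /= w.sum()
    return w @ codebook

x = rng.normal(size=4)

# As tau -> 0 the soft output approaches the hard (discrete) output.
print(soft_tokenize(x, tau=1e-4) - hard_tokenize(x))

# For tau > 0 the output varies smoothly with x, so a finite-difference
# "gradient" is nonzero: an optimizer can push gradients end-to-end
# through the approximated tokenizer into the image pixels.
eps = 1e-5
g = (soft_tokenize(x + eps) - soft_tokenize(x - eps)) / (2 * eps)
print(g)
```

In an actual attack, the soft embedding would be fed into the language model in place of the hard token embedding during optimization, then the final adversarial image would be re-tokenized with the true discrete tokenizer at attack time.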