Vision language models (VLMs) extend the reasoning capabilities of large language models (LLMs) to cross-modal settings, yet remain highly vulnerable to multimodal jailbreak attacks. Existing defenses predominantly rely on safety fine-tuning or aggressive token manipulation, incurring substantial training costs or significantly degrading utility. Recent research shows that LLMs inherently recognize unsafe content in text, whereas the incorporation of visual inputs in VLMs frequently dilutes these risk-related signals. Motivated by this, we propose Risk Awareness Injection (RAI), a lightweight and training-free framework for safety calibration that restores LLM-like risk recognition by amplifying unsafe signals in VLMs. Specifically, RAI constructs an Unsafe Prototype Subspace from language embeddings and performs targeted modulation on selected high-risk visual tokens, explicitly activating safety-critical signals within the cross-modal feature space. This modulation restores the model's LLM-like ability to detect unsafe content from visual inputs, while preserving the semantic integrity of the original tokens for cross-modal reasoning. Extensive experiments across multiple jailbreak and utility benchmarks demonstrate that RAI substantially reduces the attack success rate without compromising task performance.
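The abstract leaves the mechanics to the body of the paper, but the core idea can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' exact procedure: it assumes the subspace is built by truncated SVD over embeddings of an unsafe-word lexicon (`unsafe_word_embeds`), that a token's risk score is the energy of its projection onto that subspace, and that modulation adds back a scaled copy of the in-subspace component. The names `rank`, `alpha`, and `top_k` are illustrative hyperparameters, not the paper's specification.

```python
import torch


def build_unsafe_subspace(unsafe_word_embeds: torch.Tensor, rank: int = 8) -> torch.Tensor:
    """Build an orthonormal basis for the Unsafe Prototype Subspace.

    unsafe_word_embeds: (n_words, d) embeddings of an unsafe lexicon, taken
    from the LLM's input embedding table (assumed setup).
    Returns a (rank, d) orthonormal basis via truncated SVD.
    """
    centered = unsafe_word_embeds - unsafe_word_embeds.mean(dim=0, keepdim=True)
    # Right-singular vectors span the dominant directions of unsafe semantics.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return vh[:rank]


def risk_aware_injection(visual_tokens: torch.Tensor,
                         basis: torch.Tensor,
                         alpha: float = 0.5,
                         top_k: int = 16) -> torch.Tensor:
    """Amplify the unsafe-subspace component of the highest-risk visual tokens.

    visual_tokens: (n_tokens, d) projected visual features entering the LLM.
    basis: (rank, d) orthonormal basis of the unsafe subspace.
    alpha, top_k: assumed hyperparameters (gain and number of tokens modulated).
    """
    # Component of each visual token that lies inside the unsafe subspace.
    coords = visual_tokens @ basis.T          # (n_tokens, rank)
    in_subspace = coords @ basis              # (n_tokens, d)
    # Risk score = energy of the token's projection onto the subspace.
    risk = in_subspace.norm(dim=-1)
    idx = risk.topk(min(top_k, visual_tokens.size(0))).indices
    out = visual_tokens.clone()
    # Boost only the unsafe component of high-risk tokens; the orthogonal
    # (semantic) component is untouched, preserving cross-modal reasoning.
    out[idx] = visual_tokens[idx] + alpha * in_subspace[idx]
    return out
```

Under these assumptions, the modulation is additive along the subspace only, which is what lets the method amplify safety-critical signal without rewriting the token's semantic content.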