Vision-language models (VLMs) have made significant progress on multi-modal tasks, but their more complex architecture makes safety alignment more challenging than for large language models (LLMs). In this paper, we reveal an uneven distribution of safety across the layers of a VLM's vision encoder: earlier and middle layers are disproportionately vulnerable to malicious inputs compared to the more robust final layers. This 'cross-layer' vulnerability stems from the model's failure to generalize its safety training beyond the default architectural settings used during training to unseen or out-of-distribution scenarios, leaving certain layers exposed. We conduct a comprehensive analysis by projecting activations from various intermediate layers and demonstrate that these layers are more likely to produce harmful outputs when exposed to malicious inputs. Our experiments on LLaVA-1.5 and Llama 3.2 show discrepancies in attack success rates and toxicity scores across layers, indicating that current safety alignment strategies, which focus on a single default layer, are insufficient.
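To make the per-layer probing concrete, below is a minimal sketch (not the authors' exact pipeline) of how activations from an intermediate vision-encoder layer can be routed through LLaVA-1.5's multimodal projector instead of the default penultimate layer. It assumes the HuggingFace `llava-hf/llava-1.5-7b-hf` checkpoint and the `transformers` attribute names `vision_tower` and `multi_modal_projector`; the probed layer index, image path, and prompt are placeholders.

```python
# Minimal sketch: probe an intermediate vision-encoder layer instead of LLaVA's
# default (penultimate) layer, then map it through the multimodal projector.
# Assumptions: HuggingFace transformers with LLaVA support, the llava-hf
# checkpoint below, and a GPU; the layer index and image path are placeholders.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

layer_to_probe = 12  # hypothetical early/middle layer; LLaVA's default is the penultimate layer

with torch.no_grad():
    # The CLIP vision tower returns the embedding output plus one hidden state per layer
    vision_out = model.vision_tower(inputs.pixel_values, output_hidden_states=True)
    feats = vision_out.hidden_states[layer_to_probe]
    feats = feats[:, 1:]  # drop the CLS token, matching LLaVA's default feature selection
    projected = model.multi_modal_projector(feats)  # project into the LLM embedding space

# `projected` can then be spliced in place of the default image tokens to compare
# per-layer behavior (e.g., attack success rate or toxicity of the generations).
print(projected.shape)  # (batch, num_patches, llm_hidden_size)
```

In this setup, the only variable changed across runs is the vision-encoder layer whose features are projected, which isolates the per-layer safety differences the abstract describes.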