Multimodal Large Language Models (MLLMs) have transformed artificial intelligence by combining visual and textual data, enabling applications such as image captioning, visual question answering, and multimodal content creation. This ability to understand and reason over complex information has made MLLMs valuable in areas such as healthcare, autonomous systems, and digital content. However, integrating multiple data modalities also introduces security risks: attackers can manipulate the visual input, the text input, or both to make the model produce unintended or even harmful responses. This paper reviews how the visual inputs of MLLMs can be exploited by various attack strategies. We group these attacks into categories ranging from simple visual perturbations and cross-modal manipulations to advanced strategies such as VLATTACK, HADES, and the Collaborative Multimodal Adversarial Attack (Co-Attack). These attacks can mislead even the most robust models while remaining nearly indistinguishable from the original visuals, making them hard to detect. We also discuss the broader security implications, including threats to privacy and safety in critical applications. To counter these risks, we review current defense methods, including the SmoothVLM framework, pixel-wise randomization, and MirrorCheck, examining their strengths and limitations. We further discuss emerging directions for making MLLMs more secure, including adaptive defenses, better evaluation tools, and security approaches that protect both visual and textual data. By synthesizing recent developments and identifying key areas for improvement, this review aims to support the creation of more secure and reliable multimodal AI systems for real-world use.