Current image generation models can effortlessly produce high-quality, highly realistic images, but this capability also increases the risk of misuse. In various Text-to-Image and Image-to-Image tasks, attackers can generate a series of images containing inappropriate content simply by editing the language-modality input. To counter this security threat, existing guard and defense methods likewise focus on defending the language modality. In practical applications, however, threats in the vision modality, particularly in tasks involving the editing of real-world images, pose greater security risks, as they can easily infringe upon the rights of the image owner. This paper therefore uses typographic attacks to reveal that various image generation models commonly face threats in the vision modality as well. Furthermore, we evaluate the defense performance of various existing methods against vision-modality threats and uncover their ineffectiveness. Finally, we propose the Vision Modal Threats in Image Generation Models (VMT-IGMs) dataset, which serves as a baseline for evaluating the vision-modality vulnerability of various image generation models.