Recent approaches that embed imperceptible perturbations in input images have shown promise in countering malicious manipulation by diffusion-based image editing systems. However, existing methods exhibit limited transferability in cross-model evaluations. To address this, we propose Transferable Defense Against Malicious Image Edits (TDAE), a novel bimodal framework that strengthens image immunity against malicious edits through coordinated image-text optimization. At the visual defense level, we introduce the FlatGrad Defense Mechanism (FDM), which incorporates gradient regularization into the adversarial objective. By explicitly steering the perturbations toward flat minima, FDM improves immune robustness against unseen editing models. At the textual protection level, we propose an adversarial optimization paradigm, Dynamic Prompt Defense (DPD), which periodically refines text embeddings to align the editing outcomes of immunized images with those of the original images, then updates the images under the optimized embeddings. Through iterative adversarial updates over diverse embeddings, DPD drives the immunized images to capture a broader set of immunity-enhancing features, thereby achieving cross-model transferability. Extensive experiments demonstrate that TDAE achieves state-of-the-art performance in mitigating malicious edits under both intra- and cross-model evaluations.
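The FDM idea described above can be illustrated with a minimal numpy sketch: a PGD-style ascent on an adversarial objective augmented with a gradient-norm penalty, which pushes the perturbation toward flat regions of the loss. The quadratic surrogate loss, the `eps`/`lam`/`step` values, and the analytic gradients are all illustrative assumptions for exposition; they are not the paper's actual editing objective or hyperparameters.

```python
import numpy as np

# Illustrative surrogate for the editing loss: L(x) = 0.5 * x^T A x,
# with A symmetric PSD so the gradients below are exact.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A @ A.T / 8.0

def loss(x):
    """Surrogate editing loss L(x) = 0.5 * x^T A x."""
    return 0.5 * float(x @ A @ x)

def grad(x):
    """Analytic gradient of L: A x (A is symmetric)."""
    return A @ x

def grad_norm_sq_grad(x):
    """Gradient of ||grad L(x)||^2 = x^T A A x, i.e. 2 A A x."""
    return 2.0 * A @ (A @ x)

def fdm_perturb(x0, eps=0.1, lam=0.5, step=0.02, iters=100):
    """Sign-PGD ascent on  L(x0 + d) - lam * ||grad L(x0 + d)||^2.

    The gradient-norm penalty (the flat-minima regularizer) discourages
    perturbations that sit on sharp ridges of the loss, while the L-inf
    projection keeps the perturbation imperceptibly small.
    """
    d = np.zeros_like(x0)
    for _ in range(iters):
        # Ascent direction on the regularized adversarial objective.
        g = grad(x0 + d) - lam * grad_norm_sq_grad(x0 + d)
        # Signed step, then project back into the eps-ball.
        d = np.clip(d + step * np.sign(g), -eps, eps)
    return d

x0 = rng.standard_normal(8)
delta = fdm_perturb(x0)
```

In a real instantiation, `loss` would be the diffusion editor's reconstruction or editing loss, and the second term would be estimated (e.g. by finite differences or a double-backward pass) rather than computed analytically.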
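The alternating structure of DPD, as described in the abstract, can likewise be sketched as a toy min-max loop: an embedding step that refines the text embedding to re-align the edits of the immunized and original images (the worst case for the defense), followed by an image step that updates the perturbation under that embedding to break the alignment again. The `edit` function, the coupling matrix `E`, and all step sizes here are hypothetical stand-ins, not the paper's actual editor or settings.

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((8, 4)) * 0.5   # illustrative text-embedding coupling
x0 = rng.standard_normal(8)             # stand-in for the original image

def edit(x, e):
    """Toy nonlinear stand-in for a diffusion editor conditioned on embedding e."""
    return np.tanh(x + E @ e)

def align_loss(d, e):
    """How closely the edit of the immunized image tracks the edit of the original."""
    gap = edit(x0 + d, e) - edit(x0, e)
    return float(gap @ gap)

def num_grad(f, v, h=1e-4):
    """Central finite-difference gradient, keeping the sketch self-contained."""
    g = np.zeros_like(v)
    for i in range(v.size):
        vp, vm = v.copy(), v.copy()
        vp[i] += h
        vm[i] -= h
        g[i] = (f(vp) - f(vm)) / (2.0 * h)
    return g

def dpd(eps=0.1, outer=3, inner=15, lr_e=0.1, lr_d=0.02):
    d = rng.uniform(-0.05, 0.05, size=8)  # small initial immunizing perturbation
    e = np.zeros(4)                       # text embedding to be refined
    for _ in range(outer):
        # (1) Embedding step: refine e so the edited outputs align,
        #     i.e. the adversarial worst case for the defense.
        for _ in range(inner):
            e = e - lr_e * num_grad(lambda v: align_loss(d, v), e)
        # (2) Image step: update d under the refined embedding so the edits
        #     diverge again, projecting back into the L-inf eps-ball.
        for _ in range(inner):
            g = num_grad(lambda v: align_loss(v, e), d)
            d = np.clip(d + lr_d * np.sign(g), -eps, eps)
    return d, e

d, e = dpd()
```

Because each outer iteration exposes the perturbation to a different refined embedding, the resulting `d` is optimized against a family of conditioning signals rather than a single fixed prompt, which is the mechanism the abstract credits for cross-model transferability.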