The rapid evolution of diffusion models has democratized face swapping but has also raised concerns about privacy and identity security. Existing proactive defenses, often adapted from image editing attacks, prove ineffective in this context. We attribute this failure to an oversight of the structural resilience and the unique static conditional guidance mechanism inherent in face swapping systems. To address this, we propose VoidFace, a systemic defense method that views face swapping as a coupled identity pathway. By injecting perturbations at critical bottlenecks, VoidFace induces cascading disruption throughout the pipeline. Specifically, we first introduce localization disruption and identity erasure to degrade physical regression and semantic embeddings, thereby impairing accurate modeling of the source face. We then intervene in the generative domain by decoupling attention mechanisms to sever identity injection and by corrupting intermediate diffusion features to prevent reconstruction of the source identity. To ensure visual imperceptibility, we perform an adversarial search in the latent manifold, guided by a perceptually adaptive strategy that balances attack potency with image quality. Extensive experiments show that VoidFace outperforms existing defenses across various diffusion-based swapping models while producing adversarial faces with superior visual quality.