The growing use of portrait images in computer vision highlights the need to protect personal identities. At the same time, anonymized images must remain useful for downstream computer vision tasks. In this work, we propose a unified framework that leverages the inpainting ability of latent diffusion models to generate realistic anonymized images. Unlike prior approaches, our framework provides complete control over the anonymization process: an adaptive attribute-guidance module applies gradient correction during the reverse denoising process, aligning the facial attributes of the generated image with those of a synthesized target image. The framework also supports localized anonymization, allowing users to specify which facial regions to leave unchanged. Extensive experiments on the public CelebA-HQ and FFHQ datasets show that our method outperforms state-of-the-art approaches while requiring no additional model training. The source code is available on our page.
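The attribute-guided gradient correction mentioned above can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: a linear map stands in for a real facial-attribute network (so the guidance gradient has a closed form), a simple shrinkage stands in for one reverse diffusion step, and the names `attr_fn`, `denoise_step`, and `guidance_scale` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D, A = 16, 4                       # latent dimension, number of facial attributes
W = rng.normal(size=(A, D))        # toy linear attribute extractor: a(x) = W @ x
target_attrs = rng.normal(size=A)  # attributes of the synthesized target image

def attr_fn(x):
    # hypothetical attribute extractor; a real system would use a
    # facial-attribute network here
    return W @ x

def denoise_step(x, t):
    # stand-in for one reverse diffusion denoising step
    return 0.9 * x

x = rng.normal(size=D)             # start from Gaussian noise
guidance_scale = 0.01              # illustrative guidance strength
for t in range(50, 0, -1):
    x = denoise_step(x, t)
    # gradient of 0.5 * ||attr_fn(x) - target_attrs||^2 w.r.t. x;
    # subtracting it nudges the sample's attributes toward the target's
    grad = W.T @ (attr_fn(x) - target_attrs)
    x = x - guidance_scale * grad

err = np.linalg.norm(attr_fn(x) - target_attrs)
```

The correction is applied after every denoising step, so the attribute mismatch shrinks progressively over the reverse trajectory rather than being imposed in a single post-hoc edit; this mirrors classifier-guidance-style steering in diffusion models.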