Current face de-identification methods replace identifiable cues in the face region with others, sacrificing utility attributes that contribute to realism, such as age and gender. To restore this lost realism, we present FLUID (Face de-identification in the Latent space via Utility-preserving Identity Displacement), a single-input face de-identification framework that replaces identity features directly in the latent space of a pretrained diffusion model without modifying the model's weights. We reinterpret face de-identification as an image-editing task in the latent h-space of a pretrained unconditional diffusion model. Our framework estimates identity-editing directions through optimization guided by loss functions that encourage attribute preservation while suppressing identity signals. We further introduce both linear and geodesic (tangent-based) editing schemes to navigate the latent manifold effectively. Experiments on CelebA-HQ and FFHQ show that FLUID achieves a superior balance between identity suppression and attribute preservation, outperforming existing de-identification approaches in both qualitative and quantitative evaluations.
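The linear and geodesic editing schemes can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the function names, the step size `alpha`, and the specific geodesic construction (interpreting the latent as a point on a hypersphere of fixed norm and moving along the great circle in the tangent direction) are illustrative assumptions, shown here only to contrast a straight-line move with a norm-preserving tangent-based move.

```python
import numpy as np

def linear_edit(h, d, alpha):
    # Linear scheme (assumed form): step straight along the
    # editing direction d from the latent code h.
    return h + alpha * d

def geodesic_edit(h, d, alpha):
    # Tangent-based scheme (assumed form): treat h as a point on
    # the sphere of radius ||h||, project d onto the tangent space
    # at h, then rotate by angle alpha along the great circle.
    r = np.linalg.norm(h)
    u = h / r                              # unit radial direction
    d_tan = d - np.dot(d, u) * u           # tangent component of d
    n = np.linalg.norm(d_tan)
    if n < 1e-12:
        return h.copy()                    # d was purely radial
    v = d_tan / n                          # unit tangent direction
    return r * (np.cos(alpha) * u + np.sin(alpha) * v)
```

Unlike the linear step, the geodesic step keeps the latent's norm fixed, which is one common way to stay on (an approximation of) the latent manifold rather than drifting off it.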