In this work, we focus on explicit fine-grained control in generative facial image editing while generating faithful facial appearances and consistent semantic details, a task that is quite challenging and has not been extensively explored, especially in a one-shot scenario. We identify the key challenge as achieving disentangled conditional control between high-level semantics and explicit parameters (e.g., 3DMM) in the generation process, and accordingly propose a novel diffusion-based editing framework named DisControlFace. Specifically, we leverage a Diffusion Autoencoder (Diff-AE) as the semantic reconstruction backbone. To enable explicit face editing, we construct an Exp-FaceNet, compatible with Diff-AE, that generates spatial explicit control conditions from estimated 3DMM parameters. Unlike current diffusion-based editing methods that train the whole conditional generative model from scratch, we freeze the pre-trained weights of the Diff-AE to maintain its semantically deterministic conditioning capability, and propose a random semantic masking (RSM) strategy to enable effective independent training of Exp-FaceNet. This setting endows the model with disentangled face control while reducing semantic information shift during editing. Our model can be trained on 2D in-the-wild portrait images without requiring any 3D or video data, and performs robust editing on any new facial image through simple one-shot fine-tuning. Comprehensive experiments demonstrate that DisControlFace generates realistic facial images with better editing accuracy and identity preservation than state-of-the-art methods. Project page: https://discontrolface.github.io/