Facial appearance editing is crucial for digital avatars, AR/VR, and personalized content creation, where realistic, identity-faithful renderings are essential to the user experience. However, preserving identity with generative models remains challenging, especially when only limited data is available. Traditional methods often require multiple images and still suffer from unnatural face shifts, inconsistent hair alignment, or excessive smoothing. To overcome these challenges, we introduce InstaFace, a novel diffusion-based framework that generates realistic images while preserving identity from only a single image. Central to InstaFace is an efficient guidance network that harnesses 3D perspectives by integrating multiple 3DMM-based conditions without introducing additional trainable parameters. Moreover, to maximize identity retention while also preserving the background, hair, and other contextual features such as accessories, we propose a module that combines feature embeddings from a facial recognition model and a pre-trained vision-language model. Quantitative evaluations demonstrate that our method outperforms several state-of-the-art approaches in identity preservation, photorealism, and effective control of pose, expression, and lighting.
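As a rough illustration of the parameter-free conditioning idea, the sketch below reuses a single frozen encoder for every 3DMM-derived map and fuses the resulting features by averaging, so the fusion itself adds no trainable weights. The specific condition maps (normals, shading, albedo), the stand-in encoder, and the fusion-by-averaging scheme are assumptions made for illustration, not the paper's exact design.

```python
# Minimal sketch of parameter-free multi-condition fusion, assuming the
# guidance network reuses one frozen encoder for every 3DMM-derived map.
# The choice of maps (normals, shading, albedo) is illustrative only.
import torch
import torch.nn as nn

class FrozenConditionFusion(nn.Module):
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        # Freeze the shared encoder: no additional trainable parameters.
        for p in self.encoder.parameters():
            p.requires_grad_(False)

    def forward(self, condition_maps: list[torch.Tensor]) -> torch.Tensor:
        # Encode each 3DMM-rendered map with the same frozen encoder and
        # average the features, so the fusion step adds zero parameters.
        feats = [self.encoder(c) for c in condition_maps]
        return torch.stack(feats, dim=0).mean(dim=0)

# Illustrative usage with a stand-in convolutional encoder.
encoder = nn.Conv2d(3, 64, kernel_size=3, padding=1)
fusion = FrozenConditionFusion(encoder)
normals, shading, albedo = (torch.randn(1, 3, 64, 64) for _ in range(3))
guidance = fusion([normals, shading, albedo])
print(guidance.shape)  # torch.Size([1, 64, 64, 64])
```

Averaging is just one parameter-free merge; concatenation along the channel axis would be an equally plausible reading of "without additional trainable parameters," provided the downstream network already accepts the wider input.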
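The identity-and-context module can likewise be sketched at a high level. Below, an ArcFace-style identity embedding and a CLIP-style image embedding are projected into a shared token space that a diffusion denoiser could attend to via cross-attention; the embedding dimensions, the linear projections, and the token-based fusion are all assumptions for illustration, not the framework's actual module.

```python
# Hypothetical sketch of fusing identity and context embeddings into
# cross-attention tokens. The 512-d (ArcFace-style) and 768-d (CLIP-style)
# dimensions and the per-source linear projections are assumptions.
import torch
import torch.nn as nn

class IdentityContextFusion(nn.Module):
    def __init__(self, id_dim: int = 512, vl_dim: int = 768, token_dim: int = 768):
        super().__init__()
        self.id_proj = nn.Linear(id_dim, token_dim)  # face-recognition embedding
        self.vl_proj = nn.Linear(vl_dim, token_dim)  # vision-language embedding

    def forward(self, id_emb: torch.Tensor, vl_emb: torch.Tensor) -> torch.Tensor:
        # Each embedding becomes one conditioning token, letting the denoiser
        # attend separately to identity cues and background/context cues.
        return torch.stack([self.id_proj(id_emb), self.vl_proj(vl_emb)], dim=1)

fusion = IdentityContextFusion()
id_emb, vl_emb = torch.randn(1, 512), torch.randn(1, 768)
print(fusion(id_emb, vl_emb).shape)  # torch.Size([1, 2, 768])
```

Keeping the two sources as separate tokens, rather than summing them into one vector, is a common way to let attention weigh identity against contextual features per spatial location, though the paper's own fusion mechanism may differ.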