We concentrate on a novel human-centric image synthesis task: given only one reference facial photograph, the goal is to generate images of that specific individual with diverse head positions, poses, facial expressions, and illumination in different contexts. To accomplish this goal, we argue that the generative model should possess the following favorable characteristics: (1) a strong visual and semantic understanding of our world and human society for basic object and human image generation; (2) generalizable identity preservation; and (3) flexible and fine-grained head control. Recently, large pre-trained text-to-image diffusion models have shown remarkable results, serving as a powerful generative foundation. Building on such a foundation, which already provides the first capability, we aim to unleash the latter two capabilities of the pre-trained model. In this work, we present a new framework named CapHuman. We embrace the "encode then learn to align" paradigm, which enables generalizable identity preservation for new individuals without cumbersome tuning at inference. CapHuman encodes identity features and then learns to align them into the latent space. Moreover, we introduce a 3D facial prior to equip our model with flexible and 3D-consistent control over the human head. Extensive qualitative and quantitative analyses demonstrate that CapHuman can produce identity-preserving, photo-realistic, and high-fidelity portraits with content-rich representations and diverse head renditions, outperforming established baselines. Code and checkpoints will be released at https://github.com/VamosC/CapHuman.
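To make the "encode then learn to align" idea concrete, the sketch below illustrates one plausible reading of it: a frozen identity encoder extracts features from the single reference face, and only a small learned projection aligns those features to the conditioning space of the pre-trained diffusion model. All module names, dimensions, and the encoder stand-in are assumptions for illustration, not CapHuman's actual layers.

```python
import torch
import torch.nn as nn

class IdentityAlignModule(nn.Module):
    """Minimal sketch of "encode then learn to align" (assumed design, not the paper's exact layers):
    a frozen identity encoder embeds one reference face, and a trainable projection
    aligns that embedding to the latent/conditioning dimension of a pre-trained diffusion model."""
    def __init__(self, id_dim: int = 512, latent_dim: int = 768):
        super().__init__()
        # Hypothetical stand-in for a pre-trained face identity encoder (kept frozen).
        self.id_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, id_dim))
        for p in self.id_encoder.parameters():
            p.requires_grad = False
        # Only this alignment projection would be trained.
        self.align = nn.Linear(id_dim, latent_dim)

    def forward(self, ref_face: torch.Tensor) -> torch.Tensor:
        id_feat = self.id_encoder(ref_face)   # (B, id_dim) identity embedding
        return self.align(id_feat)            # (B, latent_dim) token aligned to the latent space

# Usage: the aligned token would condition the frozen diffusion model,
# e.g. alongside text-embedding tokens in cross-attention.
ref = torch.randn(1, 3, 112, 112)             # one reference facial photograph
token = IdentityAlignModule()(ref)
print(token.shape)                            # torch.Size([1, 768])
```

Because only the alignment module is trained while the encoder and diffusion backbone stay frozen, a new individual's identity can be injected at inference without per-subject fine-tuning, which matches the paradigm the abstract describes.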