This paper presents UniPortrait, an innovative human image personalization framework that unifies single- and multi-ID customization with high face fidelity, extensive facial editability, free-form input descriptions, and diverse layout generation. UniPortrait consists of only two plug-and-play modules: an ID embedding module and an ID routing module. The ID embedding module extracts versatile, editable facial features for each ID with a decoupling strategy and embeds them into the context space of the diffusion model. The ID routing module then adaptively combines these embeddings and distributes them to their respective regions within the synthesized image, enabling the customization of both single and multiple IDs. With a carefully designed two-stage training scheme, UniPortrait achieves superior performance in both single- and multi-ID customization. Quantitative and qualitative experiments demonstrate the advantages of our method over existing approaches as well as its good scalability, e.g., universal compatibility with existing generative control tools. The project page is at https://aigcdesigngroup.github.io/UniPortrait-Page/ .