Semantic Image Synthesis (SIS) is among the most popular and effective techniques in the field of face generation and editing, thanks to its high generation quality and the versatility it brings. Recent works have attempted to go beyond the standard GAN-based framework and have started to explore Diffusion Models (DMs) for this task, as these outperform GANs in terms of both quality and diversity. On the other hand, DMs lack fine-grained controllability and reproducibility. To address this, in this paper we propose an SIS framework based on a novel Latent Diffusion Model architecture for human face generation and editing that is able both to reproduce and manipulate a real reference image and to generate diversity-driven results. The proposed system utilizes both SPADE normalization and cross-attention layers to merge shape and style information and, by doing so, allows precise control over each semantic part of the human face, which was not possible with previous state-of-the-art methods. Finally, we performed an extensive set of experiments to show that our model surpasses the current state of the art, both qualitatively and quantitatively.
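To make the shape-conditioning mechanism mentioned above concrete, the following is a minimal NumPy sketch of SPADE-style spatially-adaptive normalization: a feature map is normalized parameter-free, then modulated per pixel by a scale and shift predicted from the semantic layout. This is an illustrative stand-in, not the authors' implementation; in particular, `gamma_w` and `beta_w` are hypothetical per-class weight tables replacing the small convolutional network a real SPADE block would learn.

```python
import numpy as np

def spade_modulate(features, seg_onehot, gamma_w, beta_w, eps=1e-5):
    """SPADE-style spatially-adaptive normalization (illustrative sketch).

    features:   (C, H, W) activation map from the generator
    seg_onehot: (K, H, W) one-hot semantic mask with K classes
    gamma_w, beta_w: (K, C) per-class modulation weights -- stand-ins
        for the conv layers a real SPADE block would learn
    """
    # Parameter-free normalization over the spatial dims, per channel
    mu = features.mean(axis=(1, 2), keepdims=True)
    var = features.var(axis=(1, 2), keepdims=True)
    normed = (features - mu) / np.sqrt(var + eps)

    # Per-pixel scale and shift predicted from the semantic layout:
    # each pixel picks up the modulation of its semantic class
    gamma = np.einsum('khw,kc->chw', seg_onehot, gamma_w)
    beta = np.einsum('khw,kc->chw', seg_onehot, beta_w)
    return normed * (1 + gamma) + beta
```

Because the modulation is indexed by the semantic mask, changing the weights attached to a single class (e.g. "hair" or "skin") alters only the corresponding region of the output, which is the mechanism behind per-part control.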