Makeup transfer is the task of transferring the makeup style of a reference image to a source image while preserving the source image's identity. The technique is highly desirable and has many applications. However, existing methods lack fine-grained control over the makeup style, making it difficult to achieve high-quality results in the presence of large spatial misalignments. To address this problem, we propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA). Our method generates detailed makeup transfer results, handles large spatial misalignments, and achieves part-specific and shade-controllable makeup transfer. Specifically, SARA comprises three modules. First, a spatial alignment module preserves the spatial context of the makeup and provides a target semantic map for guiding the shape-independent style codes. Second, a region-adaptive normalization module decouples shape from makeup style via per-region encoding and normalization, which facilitates the elimination of spatial misalignments. Finally, a makeup fusion module blends identity features with the makeup style by injecting learned scale and bias parameters. Experimental results show that SARA outperforms existing methods and achieves state-of-the-art performance on two public datasets.
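To make the region-adaptive normalization idea concrete, the sketch below normalizes features within each semantic region (e.g. lips, eyes, skin) and then injects per-region scale and bias parameters derived from the reference makeup. This is a minimal illustrative sketch of the general technique, not the authors' implementation; the function name, shapes, and the use of plain per-region statistics are assumptions.

```python
import numpy as np

def region_adaptive_norm(features, region_mask, gamma, beta, eps=1e-5):
    """Illustrative sketch (not the paper's code): per-region normalization
    followed by injection of learned scale (gamma) and bias (beta).

    features    : (H, W, C) float array of source identity features
    region_mask : (H, W) integer array of semantic region labels
    gamma, beta : dicts mapping region label -> (C,) style parameters,
                  assumed to be produced by a style encoder per region
    """
    out = np.empty_like(features, dtype=float)
    for label in np.unique(region_mask):
        idx = region_mask == label
        region = features[idx]                 # (N, C) pixels of this region
        mu = region.mean(axis=0)
        sigma = region.std(axis=0)
        normed = (region - mu) / (sigma + eps) # shape-independent activations
        out[idx] = normed * gamma[label] + beta[label]  # inject makeup style
    return out
```

Because normalization statistics are computed per region rather than over the whole image, the style parameters for one region (say, the lips) cannot bleed into another, which is what enables part-specific transfer.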