Contemporary makeup transfer approaches primarily hinge on unpaired learning paradigms, yet they grapple with the challenges of inaccurate supervision (e.g., face misalignment) and sophisticated facial prompts (including face parsing and landmark detection). These challenges prohibit low-cost deployment of facial makeup models, especially on mobile devices. To address the above problems, we propose a brand-new learning paradigm, termed "Data Amplify Learning (DAL)," alongside a compact makeup model named "TinyBeauty." The core idea of DAL lies in employing a Diffusion-based Data Amplifier (DDA) to "amplify" limited images for model training, thereby enabling accurate pixel-to-pixel supervision with merely a handful of annotations. Two pivotal innovations in DDA facilitate this training approach: (1) a Residual Diffusion Model (RDM) is designed to generate high-fidelity details and circumvent the detail-vanishing problem in vanilla diffusion models; (2) a Fine-Grained Makeup Module (FGMM) is proposed to achieve precise makeup control and combination while retaining face identity. Coupled with DAL, TinyBeauty requires merely 80K parameters to achieve state-of-the-art performance without intricate face prompts. Meanwhile, TinyBeauty achieves a remarkable inference speed of up to 460 fps on the iPhone 13. Extensive experiments show that DAL can produce highly competitive makeup models using only 5 image pairs.