In this work, we introduce two types of makeup prior models that extend existing 3D face prior models: a PCA-based prior and a StyleGAN2-based prior. The PCA-based prior is a linear model that is easy to construct and computationally efficient, but it retains only low-frequency information. Conversely, the StyleGAN2-based model can represent high-frequency information at a higher computational cost than the PCA-based model. Despite this trade-off, both models are applicable to 3D facial makeup estimation and related applications. By leveraging the makeup prior models and designing a makeup consistency module, we effectively address the challenges that previous methods face in robustly estimating makeup, particularly when handling self-occluded faces. Experiments demonstrate that our approach reduces computational cost by several orders of magnitude, achieving speeds up to 180 times faster. In addition, by improving the accuracy of the estimated makeup, we confirm that our methods are highly advantageous for various 3D facial makeup applications such as 3D makeup face reconstruction, user-friendly makeup editing, makeup transfer, and interpolation.
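To make the "linear, easy to construct" property of the PCA-based prior concrete, the following is a minimal sketch of fitting a PCA prior to a set of flattened makeup textures and reconstructing a texture from low-dimensional coefficients. The array shapes, function names, and the toy data are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np

def build_pca_prior(textures, n_components):
    """Fit a linear (PCA) prior from flattened makeup textures.

    textures: (N, D) array, one flattened UV makeup texture per row.
    Returns the mean texture and the top principal components.
    """
    mean = textures.mean(axis=0)
    centered = textures - mean
    # SVD of the centered data matrix yields the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # (n_components, D)
    return mean, basis

def reconstruct(mean, basis, coeffs):
    """Map low-dimensional makeup coefficients back to texture space."""
    return mean + coeffs @ basis

# Toy usage: 50 random "textures" of dimension 192, an 8-component prior.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 192))
mean, basis = build_pca_prior(X, 8)
coeffs = (X[0] - mean) @ basis.T       # project one sample onto the prior
approx = reconstruct(mean, basis, coeffs)
```

Because the model is a fixed mean plus a small orthogonal basis, reconstruction is a single matrix multiply, which is the source of the computational-efficiency advantage over a generator network, at the price of discarding high-frequency detail outside the retained components.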