We present Perm, a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. Unlike previous work that jointly models the global hair structure and local curl patterns, we propose to disentangle them using a PCA-based strand representation in the frequency domain, thereby allowing more precise editing and output control. Specifically, we leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures, termed guide textures and residual textures, respectively. These decomposed textures are later parameterized with different generative models, emulating common stages in the hair grooming process. We conduct extensive experiments to validate the architecture design of Perm, and finally deploy the trained model as a generic prior to solve task-agnostic problems, further showcasing its flexibility and superiority in tasks such as single-view hair reconstruction, hairstyle editing, and hair-conditioned image generation. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/perm/.
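The PCA-based strand representation in the frequency domain can be sketched as follows. This is a minimal illustrative toy, not Perm's actual pipeline: the synthetic strand data, the feature layout, and the choice of a real FFT along the strand are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 strands, each a polyline of 100 3D points
# (a smoothed random walk stands in for real groomed hair).
strands = np.cumsum(rng.normal(scale=0.01, size=(200, 100, 3)), axis=1)

# Move each strand to the frequency domain with a real FFT along its length.
coeffs = np.fft.rfft(strands, axis=1)            # (200, 51, 3), complex

# Flatten real and imaginary parts into one feature vector per strand.
feats = np.concatenate([coeffs.real, coeffs.imag], axis=1).reshape(200, -1)

# PCA via SVD on mean-centered features; rows of Vt are principal directions.
mean = feats.mean(axis=0)
U, S, Vt = np.linalg.svd(feats - mean, full_matrices=False)

# Keep the first k components as a compact per-strand parameter vector.
k = 10
params = (feats - mean) @ Vt[:k].T               # (200, k)

# Reconstruct: project back, unflatten, inverse FFT to 3D points.
recon_feats = params @ Vt[:k] + mean
rc = recon_feats.reshape(200, 102, 3)
recon_coeffs = rc[:, :51] + 1j * rc[:, 51:]
recon = np.fft.irfft(recon_coeffs, n=100, axis=1)  # (200, 100, 3)
```

Truncating to the leading components keeps only low-frequency structure (a "guide"-like strand), while the discarded tail carries the high-frequency detail; Perm's guide/residual texture split applies this disentanglement idea across the scalp.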