We present \textsc{Perm}, a learned parametric model of human 3D hair designed to facilitate various hair-related applications. Unlike previous work that jointly models the global hair shape and local strand details, we propose to disentangle them using a PCA-based strand representation in the frequency domain, thereby allowing more precise editing and output control. Specifically, we leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures. These decomposed textures are later parameterized with different generative models, emulating common stages in the hair modeling process. We conduct extensive experiments to validate the architecture design of \textsc{Perm}, and finally deploy the trained model as a generic prior to solve task-agnostic problems, further showcasing its flexibility and superiority in tasks such as 3D hair parameterization, hairstyle interpolation, single-view hair reconstruction, and hair-conditioned image generation. Our code, data, and supplemental materials can be found at our project page: https://cs.yale.edu/homes/che/projects/perm/