Recent advances in human head modeling allow the generation of plausible-looking 3D head models via neural representations such as NeRFs and SDFs. Nevertheless, constructing complete high-fidelity head models with explicitly controlled animation remains challenging. Furthermore, completing the head geometry from a partial observation, e.g., from a depth sensor, while preserving a high level of detail is often problematic for existing methods. We introduce a generative model for detailed 3D head meshes on top of an articulated 3DMM, simultaneously allowing explicit animation and high-detail preservation. Our method is trained in two stages. First, we register a parametric head model with vertex displacements to each mesh of the recently introduced NPHM dataset of accurate 3D head scans. The estimated displacements are baked into a hand-crafted UV layout. Second, we train a StyleGAN model, which we later refer to as HeadCraft, to generalize over the UV maps of displacements. The decomposition into a parametric model and high-quality vertex displacements allows us to animate the model and modify its regions semantically. We demonstrate the results of unconditional sampling, fitting to a scan, and editing. The project page is available at https://seva100.github.io/headcraft.
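To make the decomposition concrete, the following is a minimal sketch of how a sampled UV displacement map could be applied to the vertices of a parametric template mesh. All function and variable names here are hypothetical illustrations, not the actual HeadCraft implementation; it assumes per-vertex UV coordinates in [0, 1] and uses nearest-texel lookup, whereas a real pipeline would typically use bilinear sampling.

```python
import numpy as np

def apply_uv_displacements(vertices, uvs, disp_map):
    """Displace template vertices by offsets sampled from a UV displacement map.

    vertices: (N, 3) base mesh vertices from the parametric head model
    uvs:      (N, 2) per-vertex UV coordinates in [0, 1]
    disp_map: (H, W, 3) displacement map (e.g., a generative-model sample)
    """
    h, w, _ = disp_map.shape
    # Nearest-texel lookup for simplicity; real pipelines interpolate.
    cols = np.clip(np.round(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.round(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return vertices + disp_map[rows, cols]

# Toy example: 4 vertices at the UV corners, constant displacement field.
verts = np.zeros((4, 3))
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
disp = np.full((8, 8, 3), 0.1)
out = apply_uv_displacements(verts, uvs, disp)  # every vertex shifted by 0.1
```

Because the displacements live in UV space while animation is driven by the underlying 3DMM, the same displacement map can be reapplied after any change of pose or expression.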