By equipping the most recent 3D Gaussian Splatting representation with head 3D morphable models (3DMM), existing methods manage to create head avatars with high fidelity. However, most existing methods only reconstruct a head without the body, substantially limiting their application scenarios. We find that naively applying Gaussians to model the clothed chest and shoulders tends to result in blurry reconstruction and noisy floaters under novel poses. This stems from a fundamental limitation of Gaussians and point clouds -- each Gaussian or point can only carry a single directional radiance without spatial variance, so an unnecessarily large number of them is required to represent complicated spatially varying texture, even over simple geometry. In contrast, we propose to model the body part with a neural texture that consists of coarse and pose-dependent fine colors. To properly render the body texture for each view and pose without accurate geometry or UV mapping, we optimize another sparse set of Gaussians as anchors that constrain the neural warping field mapping image-plane coordinates to texture space. We demonstrate that Gaussian Head & Shoulders can fit the high-frequency details on the clothed upper body with high fidelity and potentially improve the accuracy and fidelity of the head region. We evaluate our method on casual phone-captured and internet videos and show that it achieves superior reconstruction quality and robustness in both self- and cross-reenactment tasks. To fully utilize the efficient rendering speed of Gaussian splatting, we additionally propose an accelerated inference method for our trained model that requires no Multi-Layer Perceptron (MLP) queries and reaches a stable rendering speed of around 130 FPS for any subject.
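To make the anchor-constrained warping idea concrete, below is a minimal, purely illustrative sketch. In the actual method the warping field is a learned (neural) mapping; here it is replaced by simple inverse-distance interpolation over the k nearest anchors, just to show the data flow from image-plane coordinates, through anchor Gaussians that carry texture-space (UV) coordinates, to a texture lookup. All function and variable names are assumptions, not the paper's API.

```python
import numpy as np

# Illustrative stand-in for the anchor-based warping field: each sparse
# anchor Gaussian contributes an image-plane position (after splatting)
# and a texture-space UV coordinate. A query pixel is mapped into UV
# space by inverse-distance weighting over its k nearest anchors.
# (The paper uses a learned neural warping field; IDW is a toy substitute.)

def warp_to_texture(pixel_xy, anchor_xy, anchor_uv, k=4, eps=1e-8):
    """Map one image-plane coordinate to texture space via k nearest anchors."""
    d2 = np.sum((anchor_xy - pixel_xy) ** 2, axis=1)   # squared distances to anchors
    idx = np.argsort(d2)[:k]                           # indices of k nearest anchors
    w = 1.0 / (d2[idx] + eps)                          # inverse-distance weights
    w /= w.sum()                                       # normalize to sum to 1
    return (w[:, None] * anchor_uv[idx]).sum(axis=0)   # weighted UV coordinate

def sample_texture(texture, uv):
    """Nearest-neighbor lookup of an (H, W, 3) texture at normalized uv in [0, 1]."""
    h, w_ = texture.shape[:2]
    x = int(np.clip(uv[0] * (w_ - 1), 0, w_ - 1))
    y = int(np.clip(uv[1] * (h - 1), 0, h - 1))
    return texture[y, x]

# Toy example: four anchors at the corners of the unit square, with UVs
# equal to their positions, so the warp should reproduce the query point.
anchor_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
anchor_uv = anchor_xy.copy()
uv = warp_to_texture(np.array([0.5, 0.5]), anchor_xy, anchor_uv)
```

In this toy setup the warped UV of the center pixel is (0.5, 0.5) by symmetry; in the full method the pose-dependent fine colors would additionally modulate the sampled coarse color.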