Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent garment details. To this end, we introduce Animatable Gaussians, a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars. To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos and parameterize it on two canonical Gaussian maps (front and back), where each pixel stores a 3D Gaussian. The learned template adapts to the worn garments, allowing looser clothing such as dresses to be modeled. This template-guided 2D parameterization enables us to employ a powerful StyleGAN-based CNN to learn pose-dependent Gaussian maps that capture detailed dynamic appearance. Furthermore, we introduce a pose projection strategy for better generalization to novel poses. To enable realistic relighting of animatable avatars, we incorporate physically-based rendering into the avatar representation, decomposing it into avatar materials and environment illumination. Overall, our method creates lifelike avatars with dynamic, realistic, generalizable, and relightable appearance. Experiments show that our method outperforms other state-of-the-art approaches.
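To make the Gaussian-map parameterization concrete, below is a minimal PyTorch sketch of decoding one canonical Gaussian map into a set of per-pixel 3D Gaussians. The 14-channel layout, the function name `decode_gaussian_map`, and the anchoring of pixels to template positions are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: decoding a canonical Gaussian map into per-pixel
# 3D Gaussians. The channel layout (3 position offsets + 4 rotation
# quaternion + 3 scales + 1 opacity + 3 color = 14 channels) is an
# assumption, not the paper's actual format.
import torch
import torch.nn.functional as F

def decode_gaussian_map(gmap: torch.Tensor, template_pos: torch.Tensor):
    """gmap: (H, W, 14) CNN output; template_pos: (H, W, 3) canonical
    template surface positions that each pixel is anchored to."""
    pos_offset = gmap[..., 0:3]                 # residual over the template
    rot = F.normalize(gmap[..., 3:7], dim=-1)   # unit quaternion per pixel
    scale = torch.exp(gmap[..., 7:10])          # positive per-axis scales
    opacity = torch.sigmoid(gmap[..., 10:11])
    color = torch.sigmoid(gmap[..., 11:14])
    positions = template_pos + pos_offset       # 3D Gaussian centers
    # Flatten pixels into a list of Gaussians for the splatting renderer.
    flat = lambda t: t.reshape(-1, t.shape[-1])
    return {k: flat(v) for k, v in dict(
        position=positions, rotation=rot, scale=scale,
        opacity=opacity, color=color).items()}

# Usage: a 512x512 front map; a matching back map would be decoded the same way.
gmap = torch.randn(512, 512, 14)
template_pos = torch.zeros(512, 512, 3)
gaussians = decode_gaussian_map(gmap, template_pos)
print(gaussians["position"].shape)  # torch.Size([262144, 3])
```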
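The pose projection strategy can likewise be sketched as a low-rank projection of a novel driving pose onto the subspace spanned by the training poses, so the conditioning input stays in-distribution for the CNN. The PCA formulation, pose dimensionality, and component count below are illustrative assumptions.

```python
# Hypothetical sketch of a pose projection step: project a novel driving
# pose onto the principal-component space of the training poses. The
# SMPL-style 72-D pose vectors and the k=20 cutoff are assumptions.
import torch

def fit_pose_pca(train_poses: torch.Tensor, k: int):
    """train_poses: (N, D) flattened training pose vectors."""
    mean = train_poses.mean(dim=0)
    centered = train_poses - mean
    # Low-rank PCA via SVD; rows of Vh[:k] are the top-k components.
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    return mean, Vh[:k]

def project_pose(pose: torch.Tensor, mean: torch.Tensor, comps: torch.Tensor):
    """Project a (D,) novel pose into the training pose subspace and back."""
    coeff = comps @ (pose - mean)
    return mean + comps.T @ coeff

train_poses = torch.randn(1000, 72)   # stand-in for recorded training poses
mean, comps = fit_pose_pca(train_poses, k=20)
novel_pose = torch.randn(72)
projected = project_pose(novel_pose, mean, comps)
```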