Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent garment details. To this end, we introduce Animatable Gaussians, a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars. To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos, and then parameterize the template on two front and back canonical Gaussian maps where each pixel represents a 3D Gaussian. The learned template adapts to the garments worn, enabling the modeling of looser clothes such as dresses. Such template-guided 2D parameterization enables us to employ a powerful StyleGAN-based CNN to learn the pose-dependent Gaussian maps for modeling detailed dynamic appearances. Furthermore, we introduce a pose projection strategy for better generalization to novel poses. Overall, our method can create lifelike avatars with dynamic, realistic and generalized appearances. Experiments show that our method outperforms other state-of-the-art approaches. Code: https://github.com/lizhe00/AnimatableGaussians
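The core data structure above, a canonical Gaussian map whose pixels each store one 3D Gaussian, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the channel layout (position, rotation, scale, opacity, color) and all function names are assumptions for exposition, and the pose-dependent CNN prediction is omitted.

```python
import numpy as np

# Assumed per-pixel channel layout (illustrative, not the paper's exact spec):
# position (3) + rotation quaternion (4) + scale (3) + opacity (1) + RGB (3) = 14.
CH = {"pos": slice(0, 3), "rot": slice(3, 7), "scale": slice(7, 10),
      "opacity": slice(10, 11), "color": slice(11, 14)}
NUM_CHANNELS = 14

def make_gaussian_map(height, width):
    """Initialize one canonical Gaussian map (one of the front/back pair)."""
    gmap = np.zeros((height, width, NUM_CHANNELS), dtype=np.float32)
    gmap[..., CH["rot"]] = np.array([1.0, 0.0, 0.0, 0.0])  # identity quaternion
    gmap[..., CH["scale"]] = 0.01                            # small isotropic scale
    return gmap

def maps_to_gaussians(front_map, back_map, front_mask, back_mask):
    """Flatten the valid pixels of both maps into a flat set of 3D Gaussians.
    The boolean masks mark pixels covered by the learned template."""
    parts = []
    for gmap, mask in ((front_map, front_mask), (back_map, back_mask)):
        parts.append(gmap[mask])  # (num_valid_pixels, NUM_CHANNELS)
    return np.concatenate(parts, axis=0)

# Usage: two toy 4x4 maps with all pixels valid -> 32 Gaussians total,
# which a splatting renderer would then rasterize in the posed space.
front, back = make_gaussian_map(4, 4), make_gaussian_map(4, 4)
mask = np.ones((4, 4), dtype=bool)
gaussians = maps_to_gaussians(front, back, mask, mask)
print(gaussians.shape)  # (32, 14)
```

The point of this 2D parameterization is that the map is an image-shaped tensor, so an image-to-image CNN (the StyleGAN-based network in the paper) can predict pose-dependent offsets to these per-pixel Gaussian parameters.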