We introduce RMAvatar, a novel human avatar representation with Gaussian splatting embedded on a mesh to learn a clothed avatar from a monocular video. We use explicit mesh geometry to represent the motion and shape of a virtual human, and implicit appearance rendering with Gaussian splatting. Our method consists of two main modules: a Gaussian initialization module and a Gaussian rectification module. We embed Gaussians into triangular faces and control their motion through the mesh, which ensures low-frequency motion and surface deformation of the avatar. Due to the limitations of the linear blend skinning (LBS) formulation, the human skeleton struggles to control complex non-rigid transformations. We therefore design a pose-dependent Gaussian rectification module to learn fine-detailed non-rigid deformations, further improving the realism and expressiveness of the avatar. We conduct extensive experiments on public datasets; RMAvatar achieves state-of-the-art performance in both rendering quality and quantitative evaluations. Please see our project page at https://rm-avatar.github.io.
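The mesh-embedded Gaussian idea described above can be illustrated with a minimal sketch: each Gaussian is attached to a triangular face via fixed barycentric coordinates, so its center follows the mesh as it deforms. This is an illustrative assumption about the embedding, not the paper's actual implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def embed_gaussians(vertices, faces, barycentric):
    """Hypothetical sketch: compute Gaussian centers from barycentric
    coordinates fixed on triangular mesh faces.

    vertices:    (V, 3) posed mesh vertex positions
    faces:       (F, 3) vertex indices of each triangular face
    barycentric: (F, 3) barycentric weights of one Gaussian per face
    """
    tri = vertices[faces]  # (F, 3, 3) corner positions of each triangle
    # Weighted average of the three corners gives each Gaussian's center;
    # as the mesh deforms, the centers move with their host faces.
    centers = np.einsum('fc,fcd->fd', barycentric, tri)
    return centers

# Toy example: one triangle with a Gaussian pinned at its centroid.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
bary = np.array([[1/3, 1/3, 1/3]])
centers = embed_gaussians(verts, faces, bary)
print(centers)  # the Gaussian sits at the face centroid
```

Under this scheme, mesh-driven motion stays low-frequency by construction, which is why a separate pose-dependent rectification step is needed for fine non-rigid detail.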