This paper introduces a novel clothed human model that can be learned from multiview RGB videos, with a particular emphasis on recovering physically accurate body and cloth movements. Our method, Position Based Dynamic Gaussians (PBDyG), realizes ``movement-dependent'' cloth deformation via physical simulation, rather than relying merely on ``pose-dependent'' rigid transformations. We model the clothed human holistically yet as two distinct physical entities in contact: clothing, modeled as 3D Gaussians, is attached to a skinned SMPL body that follows the person's movement in the input videos. The articulation of the SMPL body also drives a physically based simulation of the clothing Gaussians, transforming the avatar to novel poses. To run the position-based dynamics simulation, physical properties including mass and material stiffness are estimated from the RGB videos through Dynamic 3D Gaussian Splatting. Experiments demonstrate that our method not only accurately reproduces appearance but also enables the reconstruction of avatars wearing highly deformable garments, such as skirts or coats, which have been challenging to reconstruct with existing methods.