We propose a novel framework for decomposing arbitrarily posed humans into animatable multi-layered 3D human avatars that separate the body from its garments. Conventional single-layer reconstruction methods lock clothing to a single identity, while prior multi-layer approaches struggle with occluded regions. We overcome both limitations by encoding each layer as a set of 2D Gaussians for accurate geometry and photorealistic rendering, and by inpainting hidden regions with a pretrained 2D diffusion model via score-distillation sampling (SDS). Our three-stage training strategy first reconstructs a coarse canonical garment via single-layer reconstruction, then performs multi-layer training to jointly recover the inner-layer body and the outer-layer garment details. Experiments on two 3D human benchmark datasets (4D-Dress, THuman2.0) show that our approach achieves higher rendering quality and more accurate layer decomposition and recomposition than the previous state of the art, enabling realistic virtual try-on under novel viewpoints and poses and advancing the practical creation of high-fidelity 3D human assets for immersive applications. Our code is available at https://github.com/RockyXu66/LayerGS.
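For context, score-distillation sampling backpropagates the discrepancy between injected and predicted noise from a frozen diffusion model into the parameters being optimized (here, the Gaussians of an occluded layer). Below is a minimal PyTorch-style sketch of one SDS step as it is commonly implemented; the `diffusion.pred_noise` interface, the `alphas_cumprod` attribute, and the weighting w(t) = 1 − ᾱ_t are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def sds_loss(rendered, diffusion, cond_emb, t_range=(0.02, 0.98)):
    """One SDS step: pull a rendered view toward the diffusion prior.

    rendered  : (B, 3, H, W) image from the Gaussian rasterizer, requires grad.
    diffusion : hypothetical wrapper exposing `alphas_cumprod` (cumulative
                noise schedule) and `pred_noise(x_t, t, emb)`; substitute
                the API of the actual pretrained 2D diffusion model.
    cond_emb  : conditioning embedding, e.g. an inpainting prompt.
    """
    B = rendered.shape[0]
    num_steps = diffusion.alphas_cumprod.shape[0]
    t = torch.randint(int(t_range[0] * num_steps), int(t_range[1] * num_steps),
                      (B,), device=rendered.device)
    alpha_bar = diffusion.alphas_cumprod[t].view(B, 1, 1, 1)

    noise = torch.randn_like(rendered)
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = alpha_bar.sqrt() * rendered + (1.0 - alpha_bar).sqrt() * noise

    with torch.no_grad():  # the diffusion model stays frozen
        noise_pred = diffusion.pred_noise(x_t, t, cond_emb)

    w = 1.0 - alpha_bar                 # one common choice of weighting w(t)
    grad = w * (noise_pred - noise)     # SDS gradient; no backprop through the U-Net
    # Surrogate loss whose gradient w.r.t. `rendered` equals `grad` (batch-averaged)
    return (grad.detach() * rendered).sum() / B
```

In such a setup the returned loss is added to the reconstruction objective only on views where a layer is occluded, so the diffusion prior fills in hidden regions while observed regions remain supervised by the input images.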


