High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have enabled new possibilities, but they either couple the garment intricately with the human body or are difficult to reuse. We introduce ClotheDreamer, a 3D Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization. DCGS represents the clothed avatar as a single Gaussian model but freezes the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to supervise the clothed-avatar and garment RGBD renderings respectively with pose conditions, and we propose a new pruning strategy for loose clothing. Our approach also supports custom clothing templates as input. Benefiting from this design, the synthesized 3D garments can be easily applied to virtual try-on and support physically accurate animation. Extensive experiments showcase our method's superior and competitive performance. Our project page is at https://ggxxii.github.io/clothedreamer.
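As a rough illustration of the separate-optimization idea behind DCGS, the minimal PyTorch sketch below keeps body and garment splats in one Gaussian model but excludes the body splats from gradient updates. This is not the authors' implementation; all names, splat counts, and the placeholder renderer are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of DCGS-style separate optimization:
# the clothed avatar is rendered as one Gaussian set, but only the garment
# splats are optimized while the body splats stay frozen.
import torch

N_body, N_garment = 20_000, 30_000  # hypothetical splat counts

def init_gaussians(n):
    # Each Gaussian: position (3), rotation quaternion (4), scale (3),
    # opacity (1), color (3). Random init purely for illustration.
    return torch.nn.ParameterDict({
        "xyz":      torch.nn.Parameter(torch.randn(n, 3)),
        "rotation": torch.nn.Parameter(torch.randn(n, 4)),
        "scale":    torch.nn.Parameter(torch.randn(n, 3)),
        "opacity":  torch.nn.Parameter(torch.randn(n, 1)),
        "color":    torch.nn.Parameter(torch.randn(n, 3)),
    })

body, garment = init_gaussians(N_body), init_gaussians(N_garment)

# Freeze the body splats: they contribute to the clothed-avatar rendering
# but receive no gradients.
for p in body.values():
    p.requires_grad_(False)

# Only garment parameters are handed to the optimizer.
optimizer = torch.optim.Adam(garment.parameters(), lr=1e-3)

def render_rgbd(gaussian_groups, camera):
    # Placeholder for a differentiable Gaussian rasterizer returning RGB + depth;
    # in practice both the clothed avatar (body + garment) and the garment alone
    # would be rendered and supervised by the two SDS branches.
    ...
```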