Neural implicit functions have brought impressive advances to the state of the art in clothed human digitization from multiple or even single images. However, despite this progress, current methods still struggle to generalize to unseen images with complex cloth deformations and body poses. In this work, we present GarVerseLOD, a new dataset and framework that paves the way to unprecedented robustness in high-fidelity 3D garment reconstruction from a single unconstrained image. Inspired by the recent success of large generative models, we believe that one key to addressing the generalization challenge lies in the quantity and quality of 3D garment data. To this end, GarVerseLOD collects 6,000 high-quality cloth models with fine-grained geometric details, manually created by professional artists. Beyond the scale of the training data, we observe that disentangled granularities of geometry play an important role in boosting the generalization capability and inference accuracy of the learned model. We hence craft GarVerseLOD as a hierarchical dataset with levels of details (LOD), spanning from detail-free stylized shapes to pose-blended garments with pixel-aligned details. This makes the highly under-constrained problem tractable by factorizing the inference into easier tasks, each narrowed down to a smaller search space. To ensure that GarVerseLOD generalizes well to in-the-wild images, we propose a novel labeling paradigm based on conditional diffusion models that generates extensive, highly photorealistic paired images for each garment model. We evaluate our method on a large set of in-the-wild images. Experimental results demonstrate that GarVerseLOD can generate standalone garment pieces of significantly better quality than prior approaches. Project page: https://garverselod.github.io/