High-quality 3D garment reconstruction plays a crucial role in mitigating the sim-to-real gap in applications such as digital avatars, virtual try-on, and robotic manipulation. However, existing garment reconstruction methods typically rely on unstructured representations, such as 3D Gaussian Splats, and struggle to reconstruct garment topology and sewing structure accurately. As a result, their outputs are often unsuitable for high-fidelity physical simulation. We propose ReWeaver, a novel framework for topology-accurate 3D garment and sewing-pattern reconstruction from sparse multi-view RGB images. Given as few as four input views, ReWeaver predicts seams and panels, together with their connectivity, in both 2D UV space and 3D space. The predicted seams and panels align precisely with the multi-view images, yielding structured 2D--3D garment representations suitable for 3D perception, high-fidelity physical simulation, and robotic manipulation. To enable effective training, we construct GCD-TS, a large-scale dataset comprising multi-view RGB images, 3D garment geometries, textured human body meshes, and annotated sewing patterns. The dataset contains over 100,000 synthetic samples covering a wide range of complex geometries and topologies. Extensive experiments show that ReWeaver consistently outperforms existing methods in topology accuracy, geometry alignment, and seam-panel consistency.