High-fidelity garment modeling remains challenging due to the lack of large-scale, high-quality datasets and of efficient representations capable of handling non-watertight, multi-layer geometries. In this work, we introduce Garmage, a neural-network- and CG-friendly garment representation that seamlessly encodes the accurate geometry and sewing pattern of complex multi-layered garments as a structured set of per-panel geometry images. As a dual 2D-3D representation, Garmage tightly integrates 2D image-based algorithms with 3D modeling workflows, enabling high-fidelity, non-watertight, multi-layered garment geometries that are directly compatible with industrial-grade simulation. Built upon this representation, we present GarmageNet, a generation framework that produces detailed multi-layered garments with body-conforming initial geometries and intricate sewing patterns, conditioned on user prompts or existing in-the-wild sewing patterns. We further introduce a robust stitching algorithm that recovers per-vertex stitches, ensuring seamless integration into flexible simulation pipelines for downstream editing of sewing patterns, material properties, and dynamics. Finally, we release an industrial-standard, large-scale, high-fidelity garment dataset featuring detailed annotations and vertex-wise correspondences, together with a robust pipeline for converting unstructured production sewing patterns into GarmageNet-standard structural assets, paving the way for large-scale, industrial-grade garment generation systems.
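To make the representation concrete, the following is a minimal sketch of how a single per-panel geometry image could be converted into a simulation-ready triangle mesh. The abstract does not specify the exact tensor layout; we assume each panel is stored as an H×W×3 image whose valid pixels hold 3D positions of the draped panel surface, with a binary mask delimiting the (possibly non-convex) panel interior, which is what lets the format remain non-watertight. The function name `panel_to_mesh` and the array shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def panel_to_mesh(geom_img: np.ndarray, mask: np.ndarray):
    """Triangulate one per-panel geometry image into a 3D mesh (sketch).

    geom_img: (H, W, 3) float array; each valid pixel stores the 3D
              position of a point on the draped panel surface.
    mask:     (H, W) bool array; True inside the panel boundary, so
              panels need not be watertight or simply connected.
    """
    H, W, _ = geom_img.shape
    # Map each valid pixel to a vertex index; -1 marks background pixels.
    idx = -np.ones((H, W), dtype=np.int64)
    idx[mask] = np.arange(mask.sum())
    verts = geom_img[mask]

    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            # Emit two triangles only when the whole pixel quad is valid,
            # so the mesh stops cleanly at the panel boundary.
            if min(a, b, c, d) >= 0:
                faces.append((a, b, c))
                faces.append((b, d, c))
    return verts, np.asarray(faces, dtype=np.int64)
```

Because panels live on a regular pixel grid, 2D image operators (convolutions, diffusion, inpainting) apply directly to `geom_img`, while `panel_to_mesh` recovers the 3D geometry for CG tooling; this is the dual 2D-3D property the abstract refers to.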
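Similarly, one plausible ingredient of per-vertex stitch recovery is pairing the boundary vertices of two seam curves by normalized arc length, so that each vertex on one panel edge is stitched to its counterpart on the other. This is only a hedged sketch of the general idea; the paper's actual algorithm is described as more robust, and `match_seam_vertices` with its signature is a hypothetical helper.

```python
def match_seam_vertices(curve_a: np.ndarray, curve_b: np.ndarray):
    """Pair vertices of two seam curves by normalized arc length (sketch).

    curve_a, curve_b: (Na, 3) and (Nb, 3) ordered boundary vertices of
    the two panel edges joined by one stitch definition (each with at
    least two vertices). Returns (Na, 2) index pairs (i, j) giving a
    per-vertex stitch from curve_a into curve_b.
    """
    def arclen(c):
        # Cumulative edge lengths, normalized to [0, 1].
        d = np.linalg.norm(np.diff(c, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(d)])
        return t / t[-1]

    ta, tb = arclen(curve_a), arclen(curve_b)
    # For each vertex of curve_a, take the arc-length-nearest vertex of curve_b.
    j = np.abs(tb[None, :] - ta[:, None]).argmin(axis=1)
    return np.stack([np.arange(len(curve_a)), j], axis=1)
```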