Generating sewing patterns for garment design is receiving increasing attention because sewing patterns are CG-friendly and easy to edit. Previous sewing-pattern generation methods can produce exquisite clothing but struggle to design complex garments with fine-grained control. To address these issues, we propose SewingLDM, a multi-modal generative model that produces sewing patterns controlled by text prompts, body shapes, and garment sketches. We first extend the original sewing-pattern vector into a more comprehensive representation that covers more intricate details, and then compress it into a compact latent space. To learn the sewing-pattern distribution in this latent space, we design a two-step training strategy that injects the multi-modal conditions, i.e., body shapes, text prompts, and garment sketches, into a diffusion model, ensuring the generated garments are body-suited and controllable in detail. Comprehensive qualitative and quantitative experiments demonstrate the effectiveness of the proposed method, which significantly surpasses previous approaches in complex garment design and adaptability to various body shapes. Our project page: https://shengqiliu1.github.io/SewingLDM.
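The abstract describes injecting multiple condition signals (body shape, text prompt, garment sketch) into a latent diffusion model. The sketch below is a toy illustration of that general idea, not the paper's implementation: it fuses three hypothetical condition embeddings and applies them in one classifier-free-guidance denoising step, a standard technique for conditional diffusion models. All dimensions, function names, and the fusion-by-sum choice are illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT SewingLDM's actual architecture): one guided denoising
# step over a latent vector, with three condition embeddings fused by
# simple summation. All shapes and weights here are arbitrary stand-ins.

rng = np.random.default_rng(0)
LATENT_DIM, COND_DIM = 32, 16

# Fixed random linear maps standing in for a trained epsilon-predictor.
W_Z = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.01
W_C = rng.standard_normal((COND_DIM, LATENT_DIM)) * 0.01

def fuse_conditions(text_emb, body_emb, sketch_emb):
    """Fuse the three condition embeddings into one vector (naive sum)."""
    return text_emb + body_emb + sketch_emb

def denoiser(z_t, cond):
    """Stand-in noise predictor: a fixed linear map of latent + condition."""
    return z_t @ W_Z + cond @ W_C

def guided_step(z_t, cond, guidance_scale=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one, then take a simplified
    update (the real noise schedule is omitted for brevity)."""
    eps_cond = denoiser(z_t, cond)
    eps_uncond = denoiser(z_t, np.zeros(COND_DIM))  # empty condition
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    return z_t - eps

z = rng.standard_normal(LATENT_DIM)
cond = fuse_conditions(*(rng.standard_normal(COND_DIM) for _ in range(3)))
z_next = guided_step(z, cond)
print(z_next.shape)  # the latent keeps its shape across the step
```

In practice, conditional diffusion models usually inject conditions through cross-attention or adaptive normalization rather than a linear sum, and iterate many such steps under a noise schedule; the single linear step here only mirrors the guidance arithmetic.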