We introduce FabricDiffusion, a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes. Existing approaches typically synthesize textures on the garment surface through 2D-to-3D texture mapping or depth-aware inpainting via generative models. Unfortunately, these methods often struggle to capture and preserve texture details, particularly due to challenging occlusions, distortions, or poses in the input image. Inspired by the observation that in the fashion industry most garments are constructed by stitching sewing patterns with flat, repeatable textures, we cast the task of clothing texture transfer as extracting distortion-free, tileable texture materials that are subsequently mapped onto the UV space of the garment. Building upon this insight, we train a denoising diffusion model on a large-scale synthetic dataset to rectify distortions in the input texture image. This process yields a flat texture map that enables tight coupling with existing Physically-Based Rendering (PBR) material generation pipelines, allowing for realistic relighting of the garment under various lighting conditions. We show that FabricDiffusion can transfer a variety of features from a single clothing image, including texture patterns, material properties, and detailed prints and logos. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art methods on both synthetic data and real-world, in-the-wild clothing images, while generalizing to unseen textures and garment shapes.