Image-based virtual try-on is an increasingly popular and important task that generates realistic images of a specific person wearing a target garment. Recent methods model virtual try-on as a mask-based image inpainting task, which requires masking the person image and thus discards significant spatial information. In particular, for in-the-wild try-on scenarios with complex poses and occlusions, mask-based methods often introduce noticeable artifacts. Our research shows that a mask-free approach can fully exploit the spatial and lighting information of the original person image, enabling high-quality virtual try-on. We therefore propose a novel training paradigm for a mask-free try-on diffusion model. We equip the model with mask-free try-on capability by constructing high-quality pseudo-data, and we further strengthen its handling of complex spatial information through effective in-the-wild data augmentation. In addition, we design a try-on localization loss that concentrates on the try-on area while suppressing garment features in non-try-on areas, ensuring precise rendering of the garment and faithful preservation of the foreground and background. Finally, we present BooW-VTON, a mask-free virtual try-on diffusion model that delivers state-of-the-art try-on quality without any parsing cost. Extensive qualitative and quantitative experiments demonstrate superior performance in wild scenarios with such low-demand inputs.