Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target domain in the style of a target-domain exemplar image, without ground-truth input-translation pairs. Existing UEI2I methods either represent style with one global vector per image or rely on semantic supervision to define one style vector per object. Here, in contrast, we propose to represent style as a dense feature map, allowing for finer-grained style transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations. To stylize the source content with the exemplar style, we extract unsupervised cross-domain semantic correspondences and use them to warp the exemplar style onto the source content. We demonstrate the effectiveness of our method on four datasets using standard metrics together with a localized style metric we propose, which measures style similarity in a class-wise manner. Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars than those of state-of-the-art methods. Project page: https://github.com/IVRL/dsi2i
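To make the correspondence-based warping step concrete, below is a minimal PyTorch sketch of one way dense exemplar style could be transported onto the source layout via soft semantic correspondences. The function name `warp_style`, the tensor shapes, and the softmax temperature `tau` are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch of correspondence-based style warping, assuming
# PyTorch tensors. Shapes and names are hypothetical.
import torch
import torch.nn.functional as F

def warp_style(src_content, exm_content, exm_style, tau=0.01):
    """Warp the exemplar's dense style map onto the source layout.

    src_content: (C, H, W) content features of the source image
    exm_content: (C, H, W) content features of the exemplar
    exm_style:   (S, H, W) dense style features of the exemplar
    Returns a (S, H, W) style map aligned with the source content.
    """
    C, H, W = src_content.shape
    S = exm_style.shape[0]
    # Flatten spatial dims and L2-normalize so dot products are cosine similarities.
    src = F.normalize(src_content.reshape(C, H * W), dim=0)  # (C, N)
    exm = F.normalize(exm_content.reshape(C, H * W), dim=0)  # (C, N)
    # Dense correspondence: each source location attends over exemplar locations.
    corr = src.t() @ exm                 # (N, N) cosine similarities
    attn = F.softmax(corr / tau, dim=1)  # soft cross-domain correspondences
    # Transport the exemplar's dense style to the source layout.
    warped = attn @ exm_style.reshape(S, H * W).t()  # (N, S)
    return warped.t().reshape(S, H, W)
```

The attention-style formulation keeps the warp differentiable, so it can sit inside the training loop alongside the perceptual and adversarial losses mentioned above.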
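The abstract also mentions a localized style metric that compares style class-wise. The sketch below shows one plausible instantiation under stated assumptions: per-class Gram statistics (in the spirit of Gatys-style losses) computed over semantic masks. The actual features, statistics, and aggregation used by the paper's metric may differ.

```python
# A hypothetical class-wise style distance, assuming per-pixel semantic
# labels and VGG-like feature maps; not the paper's exact metric.
import torch

def classwise_style_distance(feat_a, feat_b, mask_a, mask_b, num_classes):
    """Average style distance over semantic classes.

    feat_a, feat_b: (C, H, W) feature maps of translation and exemplar
    mask_a, mask_b: (H, W) long tensors of per-pixel class labels
    """
    C = feat_a.shape[0]
    dists = []
    for k in range(num_classes):
        sel_a = feat_a.reshape(C, -1)[:, mask_a.reshape(-1) == k]  # (C, Nk)
        sel_b = feat_b.reshape(C, -1)[:, mask_b.reshape(-1) == k]
        if sel_a.numel() == 0 or sel_b.numel() == 0:
            continue  # class absent in one of the images; skip it
        # Per-class Gram matrices as localized style statistics.
        gram_a = sel_a @ sel_a.t() / sel_a.shape[1]
        gram_b = sel_b @ sel_b.t() / sel_b.shape[1]
        dists.append((gram_a - gram_b).pow(2).mean())
    if not dists:
        raise ValueError("no class present in both images")
    return torch.stack(dists).mean()
```

Restricting the statistics to each class's pixels is what makes the metric "localized": a translation that copies the exemplar's global palette but misplaces per-object styles scores worse than under an image-level Gram comparison.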