Generative models are widely used in image recognition to synthesise additional images whose distribution resembles that of the real data. Such models typically employ a discriminator network tasked with differentiating style-transferred data from data in the target dataset. In doing so, however, the network focuses on discrepancies in the intensity distribution and may overlook structural differences between the datasets. In this paper, we formulate a new image-to-image translation problem that ensures the structure of the generated images matches that of the target dataset. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network, which accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module. The spatial transformation block reduces the structural gap between the two images and also produces an inverse deformation field used to warp the final segmentation back. The intensity distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method can transfer both intensity distribution and structural content across multiple datasets.
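The role of the inverse deformation field described above can be illustrated with a minimal sketch: a dense displacement field warps an image into the target geometry, and the negated field (a valid inverse for this toy constant shift) maps a segmentation of the warped image back to the source space. This is an assumed simplification for illustration only; the paper's actual spatial transformation block is learned, and `warp` is a hypothetical helper built on `scipy.ndimage.map_coordinates`, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, field):
    """Backward-warp a 2D image by a dense deformation field.

    field has shape (2, H, W): per-pixel (row, col) displacements
    sampled at the output grid.
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + field[0], cols + field[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# Toy example: a constant 2-pixel shift; its inverse is the negative
# shift, mirroring how the inverse deformation field returns the
# segmentation of the warped image to the source geometry.
img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0                  # a square "structure"
fwd = np.zeros((2, 16, 16))
fwd[0] = 2.0                         # displace along rows
inv = -fwd                           # exact inverse for a constant shift
moved = warp(img, fwd)               # structure aligned to target space
# pretend 'moved' was segmented in the target space, then warped back:
seg_back = warp(moved, inv)
```

For a constant shift the round trip recovers the original mask exactly; for a learned, spatially varying field the inverse must be estimated rather than negated.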