Texture models based on Generative Adversarial Networks (GANs) use zero-padding to implicitly encode the positional information of image features. However, when the spatial input is extended to generate images at larger sizes, zero-padding often degrades image quality because the positional information at the center of the image becomes incorrect. Moreover, zero-padding can limit the diversity within the generated large images. In this paper, we propose a novel approach for generating stochastic texture images at arbitrarily large sizes using GANs based on patch-by-patch generation. Instead of zero-padding, the generator uses \textit{local padding}, which shares border features between the generated patches, providing positional context and ensuring consistency at the patch boundaries. The proposed models are trainable on a single texture image and have constant GPU memory usage with respect to the output image size, and can therefore generate images of effectively infinite size. Our experiments show that our method significantly outperforms existing GAN-based texture models in both the quality and the diversity of the generated textures. Furthermore, applying local padding in state-of-the-art super-resolution models effectively eliminates tiling artifacts, enabling large-scale super-resolution. Our code is available at \url{https://github.com/ai4netzero/Infinite_Texture_GANs}.
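The core idea of local padding can be sketched in a few lines: when padding a patch's feature map, reuse the cached border features of already-generated neighbouring patches instead of zeros. The following NumPy sketch is purely illustrative (the function name, the two-neighbour handling, and the padding width are assumptions, not the paper's actual implementation):

```python
import numpy as np

def local_pad(feat, left=None, top=None, pad=1):
    """Pad a feature patch of shape (H, W, C).

    Where a previously generated neighbour is available, its border
    features are copied into the padding region; elsewhere the padding
    stays zero. This keeps adjacent patches consistent at boundaries.
    """
    h, w, c = feat.shape
    out = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=feat.dtype)
    out[pad:pad + h, pad:pad + w] = feat          # centre: the patch itself
    if left is not None:                          # left neighbour's right border
        out[pad:pad + h, :pad] = left[:, -pad:]
    if top is not None:                           # top neighbour's bottom border
        out[:pad, pad:pad + w] = top[-pad:, :]
    return out

# Example: a 4x4 patch padded with its left neighbour's border features.
patch = np.ones((4, 4, 2))
left_neighbour = np.full((4, 4, 2), 2.0)
padded = local_pad(patch, left=left_neighbour)
```

In a patch-by-patch generator, such padding would replace zero-padding at each convolutional layer, so every patch "sees" its neighbours' features rather than an artificial zero border that falsely signals an image edge.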