We investigate the task of adapting image generative models to different datasets without finetuning. To this end, we introduce Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. Semantica is trained exclusively on web-scale image pairs; that is, it receives a random image from a webpage as conditional input and models another random image from the same webpage. Our experiments highlight the expressivity of pretrained image encoders and the necessity of semantic-based data filtering for achieving high-quality image generation. Once trained, Semantica can adaptively generate new images from a dataset simply by using images from that dataset as input. We study the transfer properties of Semantica on ImageNet, LSUN Churches, LSUN Bedroom, and SUN397.