Recent advances in generative deep learning have enabled the creation of high-quality synthetic images through text-to-image generation. Prior work shows that fine-tuning a pretrained diffusion model on ImageNet and generating synthetic training images from the fine-tuned model can enhance an ImageNet classifier's performance. However, performance degrades once synthetic images outnumber real ones. In this paper, we explore whether generative fine-tuning is essential for this improvement and whether training can be scaled up further with more synthetic data. We present a new framework that leverages off-the-shelf generative models to generate synthetic training images, addressing multiple challenges: class name ambiguity, lack of diversity in naive prompts, and domain shifts. Specifically, we leverage large language models (LLMs) and CLIP to resolve class name ambiguity. To diversify images, we propose contextualized diversification (CD) and stylized diversification (SD) methods, also prompted by LLMs. Finally, to mitigate domain shifts, we apply domain adaptation techniques with auxiliary batch normalization for synthetic images. Our framework consistently improves recognition performance as synthetic data grows, up to 6x the size of the original ImageNet, showcasing the potential of synthetic data for improved recognition models and strong out-of-domain generalization.
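The auxiliary batch normalization mentioned above can be sketched as a module that keeps two independent sets of normalization statistics and routes each batch by a real/synthetic flag, so synthetic images do not contaminate the statistics used for real data. This is a minimal PyTorch illustration under assumed names (`AuxBatchNorm2d`, the `synthetic` flag), not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AuxBatchNorm2d(nn.Module):
    """BatchNorm with separate statistics for real vs. synthetic batches.

    A minimal sketch of auxiliary batch normalization: two independent
    BatchNorm2d layers share no statistics, and the forward pass routes
    each batch to one of them based on a flag. Names are illustrative.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_real = nn.BatchNorm2d(num_features)  # stats for real images
        self.bn_syn = nn.BatchNorm2d(num_features)   # stats for synthetic images

    def forward(self, x: torch.Tensor, synthetic: bool = False) -> torch.Tensor:
        return self.bn_syn(x) if synthetic else self.bn_real(x)


# Route a real and a synthetic batch through their own normalizers.
bn = AuxBatchNorm2d(8)
real = torch.randn(4, 8, 16, 16)
syn = torch.randn(4, 8, 16, 16)
out_real = bn(real, synthetic=False)
out_syn = bn(syn, synthetic=True)
```

In a full recognition model, every BatchNorm layer would be replaced by such a module, and the data loader would tag each batch as real or synthetic so the appropriate statistics are updated.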