Diffusion models (DMs) have shown remarkable capabilities in generating realistic, high-quality images, audio, and videos. They benefit significantly from extensive pre-training on large-scale datasets, including web-crawled data with paired conditions, such as image-text and image-class pairs. Despite rigorous filtering, these pre-training datasets inevitably contain corrupted pairs in which the condition does not accurately describe the data. This paper presents the first comprehensive study of the impact of such corruption in the pre-training data of DMs. We synthetically corrupt ImageNet-1K and CC3M to pre-train and evaluate over 50 conditional DMs. Our empirical findings reveal that various types of slight corruption in pre-training can significantly enhance the quality, diversity, and fidelity of the images generated by different DMs, during both pre-training and downstream adaptation. Theoretically, we consider a Gaussian mixture model and prove that slight corruption in the condition leads to higher entropy and a reduced 2-Wasserstein distance to the ground truth for the data distribution generated by corruptly trained DMs. Inspired by this analysis, we propose a simple method that improves the training of DMs on practical datasets by adding condition embedding perturbations (CEP). CEP significantly improves the performance of various DMs in both pre-training and downstream tasks. We hope our study provides new insights into the data and pre-training processes of DMs. All models are released at https://huggingface.co/DiffusionNoise.
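To make the CEP idea concrete, the following is a minimal sketch of what perturbing a condition embedding could look like. It assumes CEP amounts to adding small isotropic Gaussian noise to the condition embedding during training; the function name, the noise scale `gamma`, and the exact perturbation scheme are illustrative assumptions, not the paper's reported implementation.

```python
import numpy as np

def perturb_condition_embedding(cond_emb, gamma=0.1, rng=None):
    """Hypothetical sketch of condition embedding perturbation (CEP).

    Adds small isotropic Gaussian noise, scaled by `gamma`, to a
    condition embedding (e.g., a text or class embedding) before it is
    fed to the denoising network during training. The scale and noise
    model are assumptions for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(cond_emb.shape)
    return cond_emb + gamma * noise
```

In use, such a perturbation would be applied only at training time, leaving inference conditions untouched, so the model sees slightly "corrupted" conditions analogous to the synthetic corruption studied in the experiments.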