The ultimate goal of generative models is to characterize the data distribution perfectly. For image generation, common metrics of visual quality (e.g., FID) and the realism of generated images to the human eye seem to suggest that we are close to achieving this goal. However, through distribution classification tasks, we find that, in the eyes of classifiers parameterized by neural networks, even the strongest diffusion models remain far from it. Specifically, classifiers consistently and effortlessly distinguish real from generated images across a variety of settings. We further observe an intriguing discrepancy: classifiers can identify differences between diffusion models with similar performance (e.g., U-ViT-H vs. DiT-XL), yet struggle to differentiate the smallest and largest models in the same family (e.g., EDM2-XS vs. EDM2-XXL), whereas humans exhibit the opposite tendency. As an explanation, our comprehensive empirical study suggests that, unlike humans, classifiers tend to classify images through edge and high-frequency components. We believe our methodology can serve as a probe into how generative models work, and can inspire further thought on how existing models might be improved and how misuse of such models can be prevented.
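To make the high-frequency cue concrete, the following is a minimal, hypothetical sketch (not the paper's actual classifier) of measuring the fraction of spectral energy above a radial frequency cutoff; the two toy images are synthetic stand-ins, assumed here only for illustration, for an image with strong high-frequency content versus a smooth one.

```python
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Fraction of an image's spectral power above a radial frequency cutoff."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the spectrum center (0 = DC component)
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
# Hypothetical stand-ins: a smooth ramp image vs. the same ramp plus white noise,
# which spreads energy roughly uniformly across the spectrum
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = smooth + 0.3 * rng.standard_normal((64, 64))

# The noisy image carries a larger share of high-frequency energy,
# mirroring the kind of spectral cue classifiers appear to exploit
print(high_freq_energy(smooth), high_freq_energy(noisy))
```

A statistic like this is only a caricature of what a learned classifier computes, but it illustrates why images that match in overall appearance can still differ sharply in their high-frequency statistics.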