The rapid progress of diffusion models highlights the growing need for detecting generated images. Previous research demonstrates that incorporating diffusion-based measurements, such as reconstruction error, can enhance the generalizability of detectors. However, ignoring the differing impacts of aleatoric and epistemic uncertainty on reconstruction error can undermine detection performance. Aleatoric uncertainty, arising from inherent data noise, creates ambiguity that impedes accurate detection of generated images. As it reflects random variations within the data (e.g., noise in natural textures), it does not help distinguish generated images. In contrast, epistemic uncertainty, which represents the model's lack of knowledge about unfamiliar patterns, supports detection. In this paper, we propose a novel framework, Diffusion Epistemic Uncertainty with Asymmetric Learning~(DEUA), for detecting diffusion-generated images. We introduce Diffusion Epistemic Uncertainty~(DEU) estimation via the Laplace approximation to assess the proximity of data to the manifold of diffusion-generated samples. Additionally, an asymmetric loss function is introduced to train a balanced classifier with larger margins, further enhancing generalizability. Extensive experiments on large-scale benchmarks validate the state-of-the-art performance of our method.
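To make the core idea concrete, here is a minimal, hypothetical sketch of how a Laplace approximation yields an epistemic uncertainty score. This is not the paper's DEU estimator (which operates on a diffusion model); it uses a simple logistic-regression head purely for illustration. The posterior over weights is approximated as $\mathcal{N}(w_{\text{MAP}}, H^{-1})$, and the predictive variance $x^\top H^{-1} x$ grows for inputs far from the training data, i.e., high epistemic uncertainty. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def fit_map(X, y, prior_prec=1.0, steps=500, lr=0.1):
    """Gradient descent to the MAP weights of L2-regularized logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + prior_prec * w
        w -= lr * grad / len(y)
    return w

def laplace_epistemic_variance(X, w, x_query, prior_prec=1.0):
    """Predictive (epistemic) variance x^T H^{-1} x under the Laplace approximation."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    # Hessian of the negative log-posterior at the MAP estimate:
    # sum_i p_i (1 - p_i) x_i x_i^T plus the prior precision.
    H = (X * (p * (1 - p))[:, None]).T @ X + prior_prec * np.eye(X.shape[1])
    return float(x_query @ np.linalg.solve(H, x_query))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # training inputs concentrated near the origin
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # a simple linearly separable labeling
w = fit_map(X, y)

near = laplace_epistemic_variance(X, w, np.array([0.5, 0.5]))
far = laplace_epistemic_variance(X, w, np.array([8.0, 8.0]))
print(near < far)  # epistemic uncertainty is larger far from the training data
```

The same intuition underlies DEU: samples close to the manifold the model was trained on receive low epistemic uncertainty, while off-manifold samples receive high uncertainty, which is the signal exploited for detection.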