The rise of generative Artificial Intelligence (AI) has made detecting AI-generated images a critical challenge for ensuring authenticity. Existing reconstruction-based methods lack theoretical foundations and rely on empirical heuristics, which limits their interpretability and reliability. In this paper, we introduce the Jacobian-Spectral Lower Bound on reconstruction error from a geometric perspective, showing that real images lying off the reconstruction manifold exhibit a non-trivial error lower bound, while generated images lying on the manifold have near-zero error. Furthermore, we reveal the limitations of existing methods that rely on static reconstruction error from a single pass: they often fail when some real images exhibit lower error than generated ones. This counterintuitive behavior reduces detection accuracy and requires data-specific threshold tuning, limiting applicability in real-world scenarios. To address these challenges, we propose ReGap, a training-free method that computes a dynamic reconstruction error by leveraging structured editing operations to introduce controlled perturbations. This enables measuring the change in error before and after editing, improving detection accuracy by enhancing the error separation between real and generated images. Experimental results show that our method outperforms existing baselines, exhibits robustness to common post-processing operations, and generalizes effectively across diverse conditions.
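The following is a minimal sketch of the dynamic-error idea described above, not the authors' released implementation: it assumes hypothetical callables reconstruct (one encode-decode pass of the reconstruction model) and structured_edit (a controlled perturbation), and scores an image by how much a structured edit shifts its reconstruction error.

```python
# Illustrative sketch of a dynamic reconstruction-error gap (assumptions:
# `reconstruct` and `structured_edit` are user-supplied, hypothetical callables).
import numpy as np


def reconstruction_error(img: np.ndarray, reconstruct) -> float:
    """Mean squared error between an image and its one-pass reconstruction."""
    recon = reconstruct(img)
    return float(np.mean((img - recon) ** 2))


def dynamic_error_score(img: np.ndarray, reconstruct, structured_edit) -> float:
    """Change in reconstruction error induced by a structured edit.

    Intuition from the abstract: generated images lie (near) the reconstruction
    manifold, so their error stays close to zero even after editing, whereas real
    images off the manifold show a larger shift, widening the separation between
    the two classes compared with a single static error measurement.
    """
    err_before = reconstruction_error(img, reconstruct)
    err_after = reconstruction_error(structured_edit(img), reconstruct)
    return err_after - err_before
```

In use, the score would be compared against a single decision threshold; the premise is that the error gap separates real from generated images more cleanly than the static, single-pass error, reducing the need for data-specific threshold tuning.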