The rapid advancement of image-generation technologies has made it possible for anyone to create photorealistic images with generative models, raising significant security concerns. To mitigate malicious use, tracing the origin of such images is essential. Reconstruction-based attribution methods offer a promising solution, but they often suffer from reduced accuracy and high computational cost when applied to state-of-the-art (SOTA) models. To address these challenges, we propose AEDR (AutoEncoder Double-Reconstruction), a novel training-free attribution method designed for generative models with continuous autoencoders. Unlike existing reconstruction-based approaches, which rely on the value of a single reconstruction loss, AEDR performs two consecutive reconstructions with the model's autoencoder and adopts the ratio of the two reconstruction losses as the attribution signal. Taking the ratio inherently cancels the absolute bias introduced by image complexity, and the signal is further calibrated with an image-homogeneity metric to improve accuracy, while autoencoder-based reconstruction ensures superior computational efficiency. Experiments on eight top latent diffusion models show that AEDR achieves 25.5% higher attribution accuracy than existing reconstruction-based methods while requiring only 1% of their computation time.
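The double-reconstruction signal described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy linear-nonlinear autoencoder, the function names (`aedr_signal`, `make_toy_autoencoder`), and the `eps` stabilizer are all assumptions, and the image-homogeneity calibration step is omitted. Only the core idea from the abstract is shown: reconstruct an input twice with the same autoencoder and use the ratio of the second reconstruction loss to the first as the attribution signal.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def make_toy_autoencoder(rng, dim=32, code_dim=8):
    """A random nonlinear autoencoder standing in for a generative
    model's continuous autoencoder (hypothetical, for illustration)."""
    W = rng.standard_normal((dim, code_dim)) / np.sqrt(dim)
    V = rng.standard_normal((code_dim, dim)) / np.sqrt(code_dim)
    encode = lambda x: np.tanh(x @ W)
    decode = lambda z: z @ V
    return encode, decode

def aedr_signal(x, encode, decode, eps=1e-12):
    """Double-reconstruction ratio as described in the abstract:
    reconstruct twice, return (second loss) / (first loss).
    The ratio cancels absolute scale set by image complexity."""
    r1 = decode(encode(x))   # first reconstruction
    l1 = mse(x, r1)          # first reconstruction loss
    r2 = decode(encode(r1))  # second reconstruction
    l2 = mse(r1, r2)         # second reconstruction loss
    return l2 / (l1 + eps)

rng = np.random.default_rng(0)
encode, decode = make_toy_autoencoder(rng)
x = rng.standard_normal(32)  # stand-in for a flattened image
s = aedr_signal(x, encode, decode)
```

In an attribution setting, the signal would be computed once per candidate model's autoencoder, and the image attributed to the model whose signal is most consistent with model-generated images; the abstract's homogeneity-based calibration would then adjust the raw ratio before thresholding.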