AI-generated synthetic media, commonly known as Deepfakes, have significantly influenced numerous domains, from entertainment to cybersecurity. Generative Adversarial Networks (GANs) and Diffusion Models (DMs) are the main frameworks used to create Deepfakes, producing highly realistic yet fabricated content. While these technologies open up new creative possibilities, they also pose substantial ethical and security risks due to their potential for misuse. The rise of such advanced media has contributed to a cognitive bias known as Impostor Bias, in which individuals doubt the authenticity of multimedia because they are aware of AI's capabilities. As a result, Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques, especially Convolutional Neural Networks (CNNs). Research in forensic Deepfake technology encompasses five main areas: detection, attribution and recognition, passive authentication, detection in realistic scenarios, and active authentication. This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
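The detection approach mentioned above rests on the idea that generative models leave subtle high-frequency residuals that convolutional filters can expose. As an illustrative toy sketch only (not any specific method from this paper), the snippet below applies a hand-crafted Laplacian high-pass kernel, the kind of operation a CNN's early layers can learn, and uses the residual energy as a crude "artifact" score; the `residual_energy` function and the smooth-vs-noisy comparison are assumptions for demonstration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D filtering of a single-channel image with a small kernel
    (the kernel is symmetric, so convolution and correlation coincide)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Laplacian high-pass kernel: suppresses smooth content, keeps fine residuals.
LAPLACIAN = np.array([[0, -1,  0],
                      [-1, 4, -1],
                      [0, -1,  0]], dtype=float)

def residual_energy(img):
    """Mean absolute high-frequency residual -- a toy artifact score."""
    return np.abs(conv2d(img, LAPLACIAN)).mean()

# Toy comparison: a smooth gradient vs. the same gradient with noise injected,
# standing in for "clean" vs. "artifact-laden" content.
rng = np.random.default_rng(0)
smooth = np.linspace(0.0, 1.0, 32).reshape(1, -1).repeat(32, axis=0)
noisy = smooth + 0.1 * rng.standard_normal((32, 32))

print(residual_energy(smooth) < residual_energy(noisy))  # noisy scores higher
```

In a real CNN detector, many such filters are learned from labeled real/fake data rather than fixed by hand, and a classifier head replaces the simple energy comparison.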