The proliferation of deepfake faces poses serious potential harm to our daily lives. Despite substantial advances in deepfake detection in recent years, the generalizability of existing methods to forgeries from unseen datasets or created by emerging generative models remains limited. In this paper, inspired by the zero-shot advantages of Vision-Language Models (VLMs), we propose a novel approach that repurposes a well-trained VLM for general deepfake detection. Motivated by the model reprogramming paradigm, which steers model predictions via data perturbations, our method reprograms a pretrained VLM (e.g., CLIP) solely by manipulating its input, without tuning the inner parameters. Furthermore, we insert a pseudo-word guided by facial identity into the text prompt. Extensive experiments on several popular benchmarks demonstrate that (1) cross-dataset and cross-manipulation deepfake detection performance can be significantly and consistently improved (e.g., over 88% AUC in the cross-dataset setting from FF++ to WildDeepfake) using a pretrained CLIP model with our proposed reprogramming method; and (2) this superior performance comes at a lower cost in trainable parameters, making our approach promising for real-world applications.
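To make the input-reprogramming idea concrete, the following is a minimal sketch, not the authors' released code: it assumes OpenAI's clip package with a frozen ViT-B/32 backbone, uses a single learnable additive perturbation as the only trainable parameters, and pairs images against two illustrative real/fake prompts; the identity-guided pseudo-word inserted into the text prompt is omitted here for brevity.

```python
# Minimal sketch of input-level model reprogramming on a frozen CLIP.
# Assumptions (not from the paper): OpenAI's `clip` package, an additive
# universal perturbation `delta`, and hypothetical real/fake prompt texts.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.float()                          # keep everything in fp32 for simplicity
for p in model.parameters():           # the VLM itself stays frozen
    p.requires_grad_(False)

# The only trainable parameters: a universal input perturbation.
delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-3)

# Illustrative class prompts; the paper additionally inserts an
# identity-guided pseudo-word into the prompt, which is omitted here.
text = clip.tokenize(["a photo of a real face",
                      "a photo of a fake face"]).to(device)

def forward(images):
    """Score reprogrammed images against the real/fake prompts."""
    img_feat = model.encode_image(images + delta)   # perturb the input only
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return model.logit_scale.exp() * img_feat @ txt_feat.t()  # (N, 2) logits

def train_step(images, labels):
    """One optimization step: cross-entropy on real(0)/fake(1) labels
    updates only `delta`; CLIP's weights never change."""
    logits = forward(images)
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the backbone is frozen, the trainable footprint is just the perturbation tensor, which is what keeps the parameter cost low relative to fine-tuning the VLM itself.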