DeepFakes, AI-generated media content, have become a growing concern because of their use in disinformation. DeepFake detection is currently performed with purpose-built machine learning algorithms. In this work, we investigate the capabilities of multimodal large language models (LLMs) for DeepFake detection. Through qualitative and quantitative experiments, we show that multimodal LLMs can expose AI-generated images given careful experimental design and prompt engineering. This is notable because LLMs are not inherently tailored to media forensic tasks, and the process requires no programming. We discuss the limitations of multimodal LLMs on these tasks and suggest possible improvements.
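To make the prompting approach concrete, the sketch below constructs a multimodal chat request (in the widely used OpenAI-style message format) that asks an LLM whether an image is AI-generated. The prompt wording, the helper name `build_deepfake_prompt`, and the image URL are illustrative assumptions, not the paper's actual protocol.

```python
import json

def build_deepfake_prompt(image_url: str) -> list:
    """Build a multimodal chat payload asking an LLM to judge whether
    an image is AI-generated. The prompt text is a hypothetical
    example, not the prompt used in the paper."""
    return [
        {
            "role": "system",
            "content": (
                "You are assisting with media forensics. Examine the image "
                "for signs of AI generation, such as inconsistent lighting, "
                "distorted anatomy, or repetitive texture artifacts."
            ),
        },
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this image real or AI-generated? Explain briefly."},
                # Image is passed by URL alongside the text question.
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        },
    ]

# Example payload for a hypothetical test image.
messages = build_deepfake_prompt("https://example.com/face.png")
print(json.dumps(messages, indent=2))
```

Such a payload would then be sent to a multimodal LLM endpoint; the model's free-text verdict can be parsed for a real/fake judgment, which is what makes the approach usable without writing a detection algorithm.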