Advances in machine learning have enabled the creation of realistic synthetic videos known as deepfakes. As deepfakes proliferate, concerns are mounting about the rapid spread of disinformation and the manipulation of public perception. Despite these alarming implications, our understanding of how individuals perceive synthetic media remains limited, obstructing the development of effective mitigation strategies. This paper aims to narrow this gap by investigating human responses to audio-visual distortions in videos and to deepfake-generated visuals and narration. In two between-subjects experiments, we examine whether audio-visual distortions affect cognitive processing, including subjective credibility assessments and objective learning outcomes. A third study shows that deepfake artifacts likewise influence credibility. Together, the three studies demonstrate that both video distortions and deepfake artifacts can reduce credibility. Our research contributes to the ongoing exploration of the cognitive processes involved in evaluating and perceiving synthetic videos, and underscores the need for further theory development concerning deepfake exposure.