Advances in machine learning have enabled the creation of realistic synthetic videos known as deepfakes. As deepfakes proliferate, concerns are mounting about the rapid spread of disinformation and the manipulation of public perception. Despite these alarming implications, our understanding of how individuals perceive synthetic media remains limited, obstructing the development of effective mitigation strategies. This paper aims to narrow this gap by investigating human responses to visual and auditory distortions of videos and to deepfake-generated visuals and narration. In two between-subjects experiments, we examine whether audio-visual distortions affect cognitive processing, including subjective credibility assessments and objective learning outcomes. A third study shows that deepfake artifacts likewise influence credibility. Across the three studies, both video distortions and deepfake artifacts can reduce credibility. Our research contributes to the ongoing exploration of the cognitive processes involved in evaluating and perceiving synthetic videos, and underscores the need for further theory development concerning deepfake exposure.