Deepfake technologies are powerful tools that can be misused for malicious purposes such as spreading disinformation on social media. The effectiveness of such malicious applications depends on the ability of deepfakes to deceive their audience. Therefore, researchers have investigated human abilities to detect deepfakes in various studies. However, most of these studies were conducted with participants who focused exclusively on the detection task; hence, they may not provide a complete picture of human detection abilities under realistic conditions: social media users are exposed to cognitive load on the platform, which can impair their ability to detect deepfakes. In this paper, we investigate the influence of cognitive load on human detection of voice-based deepfakes in an empirical study with 30 participants. Our results suggest that low cognitive load does not generally impair detection abilities, and that simultaneous exposure to a secondary stimulus can actually benefit people in the detection task.