It is becoming cheaper to launch disinformation operations at scale using AI-generated content, in particular 'deepfake' technology. We have observed instances of deepfakes in political campaigns, where generated content is employed both to bolster the credibility of certain narratives (reinforcing outcomes) and to manipulate public perception to the detriment of targeted candidates or causes (adversarial outcomes). We discuss the threats posed by deepfakes in politics, highlight the model specifications underlying different types of deepfake generation methods, and contribute an accessible evaluation of the efficacy of existing detection methods. We provide this as a summary for lawmakers and civil society actors to understand how the technology may be applied in light of existing policies regulating its use. We highlight the limitations of existing detection mechanisms and discuss the areas where policies and regulations are needed to address the challenges of deepfakes.