Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we interviewed defenders (e.g., fact-checkers, journalists, officials) in Taiwan and both creators (e.g., influencers, political consultants, advertisers) and defenders in India, focusing on two settings marked by high levels of online propaganda. The term "deepfakes", we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions they prioritize. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes obvious from hidden and promotional from derogatory uses. Deception was neither the main driver nor the main impact vector of AI's use: Indian creators sought to persuade rather than deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI nonetheless proved useful and was used, yielding efficiency gains in communicating across languages and modes and in evading human and algorithmic detection. Security researchers should revise threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.