In 2022, AI image generators crossed a key threshold, enabling far more efficient and dynamic production of photorealistic deepfake images than before. This opened opportunities for creative and positive uses of these models, but it also created unprecedented opportunities for the low-effort creation of AI-generated non-consensual intimate imagery (AIG-NCII), including AI-generated child sexual abuse material (AIG-CSAM). Empirically, these harms were principally enabled by a small number of models that were trained on web data containing pornographic content, released with open weights, and insufficiently safeguarded. In this paper, we observe the same patterns emerging with video generation models in 2025. Specifically, we analyze how a small number of open-weight AI video generation models have become the dominant tools for videorealistic AIG-NCII generation. We then analyze the literature on model safeguards and conclude that (1) developers who openly release the weights of capable video generation models without appropriate data curation and/or post-training safeguards foreseeably contribute to mitigatable downstream harm, and (2) model distribution platforms that do not proactively moderate individual misuse or models designed for AIG-NCII foreseeably amplify this harm. While there are no perfect defenses against AIG-NCII and AIG-CSAM from open-weight AI models, we argue that risk management by model developers and distributors, informed by emerging safeguard techniques, will substantially affect the future ease of creating AIG-NCII and AIG-CSAM with generative AI video tools.