In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as "deepfake pornography." We identify a "malicious technical ecosystem" (MTE) comprising open-source face-swapping models and nearly 200 "nudifying" software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, and we identify the flawed assumptions that explain these gaps.