This paper argues that a one-size-fits-all approach to specifying consent for the use of creative works in generative AI is insufficient. Real-world ownership and rights-holder structures, the imitation of artistic styles and likenesses, and the limitless contexts in which AI outputs are used make the status quo of binary consent with opt-in by default untenable. To move beyond the current impasse, we consider levers of control in generative AI workflows at training, inference, and dissemination. Based on these insights, we position inference-time opt-in as an overlooked opportunity for nuanced consent verification. We conceptualize nuanced consent conditions for opt-in and propose an agent-based inference-time opt-in architecture that verifies whether user intent requests meet the conditional consent granted by rights holders. In a case study for music, we demonstrate that nuanced opt-in at inference can account for established rights and re-establish a balance of power between rights holders and AI developers.