Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogues. But are these terms truly meaningful? There are myriad examples of these broad terms being regularly and repeatedly violated. Yet apart from some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable. This Article systematically assesses the enforceability of AI model terms of use and offers three contributions. First, we pinpoint a key problem: the artifacts these terms protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed. Second, we examine the problems this creates for enforcement through other means. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. Anti-competitive provisions likely fare even worse than responsible-use provisions. Third, we provide recommendations to policymakers. There are compelling reasons for many provisions to be unenforceable: they chill good-faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI, restricting the latter.