Regulatory frameworks such as the EU AI Act encourage openness of general-purpose AI models by offering legal exemptions for "open-source" models. Despite this legislative attention, the definition of an open-source foundation model remains ambiguous. This paper models the strategic interaction between the creator of a general-purpose model (the generalist) and the entity that fine-tunes it for a specialized domain or task (the specialist) in response to regulatory requirements on model openness. We present a stylized model of the regulator's choice of an open-source definition to evaluate which AI openness standards establish appropriate economic incentives for developers. Our results characterize market equilibria (specifically, upstream model-release decisions and downstream fine-tuning effort) under various openness regulations, and identify the ranges of regulatory penalties and open-source thresholds that are effective. Overall, we find that a model's baseline performance determines whether increasing the regulatory penalty or the open-source threshold significantly alters the generalist's release strategy. Our model provides a theoretical foundation for AI governance decisions around openness and enables evaluation and refinement of practical open-source policies.
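To give a concrete feel for the kind of release game the abstract describes, the following is a minimal Python sketch. Every functional form and number in it is an illustrative assumption rather than the paper's actual model: the generalist chooses between an outside option (staying closed) and releasing at some openness level, where openness erodes revenue, a release below the regulator's open-source threshold forgoes the exemption and incurs a penalty, and `closed_value` stands in for an assumed private-licensing payoff.

```python
# Minimal sketch of a stylized release game in the spirit of the abstract.
# Every functional form and number below is an illustrative assumption,
# NOT the paper's actual model.

def generalist_payoff(baseline: float, openness: float,
                      threshold: float, penalty: float) -> float:
    """Assumed payoff when the generalist releases at a given openness.

    baseline  : quality of the general-purpose model, in [0, 1]
    openness  : chosen degree of openness, in [0, 1]
    threshold : regulator's open-source threshold for the legal exemption
    penalty   : regulatory cost when the release misses the exemption
    """
    revenue = baseline * (1.0 - 0.6 * openness)   # openness erodes rents (assumed)
    fine = 0.0 if openness >= threshold else penalty
    return revenue - fine

def best_strategy(baseline: float, threshold: float, penalty: float,
                  closed_value: float = 0.25, grid: int = 101):
    """Grid-search the generalist's best action: stay closed, or release
    at the payoff-maximizing openness level."""
    best = ("closed", 0.0, closed_value)          # assumed outside option
    for i in range(grid):
        w = i / (grid - 1)
        u = generalist_payoff(baseline, w, threshold, penalty)
        if u > best[2]:
            best = ("release", w, u)
    return best

if __name__ == "__main__":
    for b in (0.3, 0.6, 0.95):                    # low / mid / high baseline
        print(b, best_strategy(b, threshold=0.7, penalty=0.3))
```

Under these assumed numbers, the low-baseline generalist stays closed, the mid-baseline one releases at exactly the threshold to claim the exemption, and the high-baseline one releases with minimal openness and absorbs the penalty; this is the flavor of the baseline-dependent comparative statics the abstract summarizes, not a result from the paper.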