The current paradigm of AI model distribution presents a fundamental dichotomy: models are either closed and API-gated, sacrificing transparency and local execution, or openly distributed, sacrificing monetization and control. We introduce OML (Open-access, Monetizable, and Loyal AI Model Serving), a primitive that enables a new distribution paradigm in which models can be freely distributed for local execution while maintaining cryptographically enforced usage authorization. We are the first to introduce and formalize this problem, providing rigorous security definitions tailored to the unique challenges of white-box model protection: model extraction resistance and permission forgery resistance. We prove fundamental bounds on the achievability of OML properties and characterize the complete design space of potential constructions, from obfuscation-based approaches to cryptographic solutions. To demonstrate practical feasibility, we present OML 1.0, a novel OML construction that couples AI-native model fingerprinting with crypto-economic enforcement mechanisms. Through extensive theoretical analysis and empirical evaluation, we establish OML as a foundational primitive for sustainable AI ecosystems. This work opens a new research direction at the intersection of cryptography, machine learning, and mechanism design, with critical implications for the future of AI distribution and governance.