Large Language Models (LLMs) represent valuable intellectual property (IP), reflecting significant investments in training data, compute, and expertise. Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft, making it essential to design inference protocols with provable security guarantees. We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits model computation between a trusted and an untrusted resource. We define and analyze the key notions of model decomposition and hybrid inference protocols, and introduce formal properties including safety, correctness, efficiency, and t-soundness. We construct secure inference protocols based on additive decompositions of weight matrices, combined with masking and probabilistic verification techniques. We prove that these protocols achieve information-theoretic security against honest-but-curious adversaries, and provide robustness against malicious adversaries with negligible soundness error. This paper focuses on the theoretical underpinnings of SLIP: precise definitions, formal protocols, and proofs of security. Empirical validation and decomposition heuristics appear in the companion SLIP paper. Together, the two works provide a complete account of securing LLM IP via hybrid inference, bridging both practice and theory.
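The additive-decomposition-with-masking construction can be illustrated with a minimal NumPy sketch. All names, the particular split `W_T` / `W_U`, and the assumption that the trusted party precomputes the unmasking term `r @ W_U` offline are illustrative only; the actual decomposition heuristics and protocol details appear in the companion SLIP paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4
W = rng.standard_normal((d_in, d_out))         # full proprietary weight matrix

# Additive decomposition: W = W_T + W_U.
# W_T stays on the trusted device; W_U is shipped to the untrusted device.
W_T = rng.standard_normal((d_in, d_out)) * 0.1  # hypothetical split
W_U = W - W_T

x = rng.standard_normal(d_in)                   # activation to be processed

# --- Trusted device: hide the activation with a one-time additive mask r ---
r = rng.standard_normal(d_in)
x_masked = x + r

# --- Untrusted device: sees only x + r, computes its share of the product ---
y_untrusted = x_masked @ W_U                    # equals x @ W_U + r @ W_U

# --- Trusted device: remove the mask (r @ W_U assumed precomputed offline)
#     and add its own share of the product ---
unmask = r @ W_U
y = (y_untrusted - unmask) + x @ W_T

# The reconstructed output matches plain (unsplit) inference:
assert np.allclose(y, x @ W)
```

Because the untrusted party only ever observes `W_U` and the masked activation `x + r`, neither the full weight matrix nor the raw activation is revealed; in the formal protocol the mask is drawn so that this hiding is information-theoretic against an honest-but-curious adversary.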