The ever-increasing size of open-source Large Language Models (LLMs) renders local deployment impractical for individual users. Decentralized computing has emerged as a cost-effective alternative, allowing individuals and small companies with surplus computational power to serve LLM inference to users. However, a computing provider may stealthily substitute the requested LLM with a smaller, less capable model without users' consent, pocketing the cost savings. We introduce SVIP, a secret-based verifiable LLM inference protocol. Unlike existing solutions based on cryptographic or game-theoretic techniques, our method is computationally efficient and does not rely on strong assumptions. Our protocol requires the computing provider to return both the generated text and the processed hidden representations from the LLM. We then train a proxy task on these representations, effectively turning them into a unique model identifier. With our protocol, users can reliably verify whether the computing provider is acting honestly. A carefully integrated secret mechanism further strengthens security. We thoroughly analyze our protocol under multiple strong and adaptive adversarial scenarios. Extensive experiments demonstrate that SVIP is accurate, generalizable, computationally efficient, and resistant to various attacks. Notably, SVIP achieves false negative rates below 5% and false positive rates below 3%, while requiring less than 0.01 seconds per prompt query for verification.
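The verification idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the proxy head, its dimensions, the tolerance, and all function names are hypothetical placeholders. It only shows the shape of the check — the provider returns a processed hidden representation alongside the text, and the user passes that representation through a small proxy network (trained offline on the requested model) and compares the output against the label expected for that model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy head: a fixed map trained offline on the requested
# LLM's hidden states. The random weights here are placeholders standing
# in for trained parameters.
D_HIDDEN, D_LABEL = 16, 4
W = rng.standard_normal((D_HIDDEN, D_LABEL))

def proxy_label(hidden_state: np.ndarray) -> np.ndarray:
    """Map a processed hidden representation to a short label vector."""
    return np.tanh(hidden_state @ W)

def verify(reported_hidden: np.ndarray, expected_label: np.ndarray,
           tol: float = 0.1) -> bool:
    """Accept iff the proxy output matches the label expected for the
    requested model, within a tolerance."""
    diff = np.linalg.norm(proxy_label(reported_hidden) - expected_label)
    return bool(diff < tol)

# Honest provider: the hidden state comes from the requested model, so
# the proxy output matches the label the user expects for this query.
h_honest = rng.standard_normal(D_HIDDEN)
expected = proxy_label(h_honest)

# Dishonest provider: a substituted model produces different hidden
# states, so the proxy output lands far from the expected label.
h_substituted = rng.standard_normal(D_HIDDEN)

print(verify(h_honest, expected))       # honest case accepted
print(verify(h_substituted, expected))  # substitution rejected
```

In the full protocol the expected label is additionally bound to a user-held secret, so a dishonest provider cannot precompute or forge the representations; this sketch omits that mechanism.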