Large language models (LLMs) have demonstrated exceptional capabilities in text understanding and generation and are increasingly used across a wide range of domains to enhance productivity. However, because training and maintaining these models is costly, and because many LLMs are proprietary, individuals typically rely on the online AI-as-a-Service (AIaaS) offerings of LLM providers. This business model poses significant privacy risks, as service providers may exploit users' access patterns and behavioral data. In this paper, we propose a practical, privacy-preserving framework that ensures user anonymity by preventing service providers from linking requests to the individuals who submit them. Our framework is built on partially blind signatures, which guarantee the unlinkability of user requests. Furthermore, we introduce two strategies tailored to subscription-based and API-based service models, respectively, protecting users' privacy while safeguarding service providers' interests. The framework integrates seamlessly with existing LLM systems, as it requires no modification to the underlying architectures. Experimental results demonstrate that our framework incurs minimal computation and communication overhead, making it a feasible solution for real-world applications.
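The abstract does not specify the concrete partially blind signature scheme, but the unlinkability property it relies on can be illustrated with a plain (non-partial) RSA blind signature: the user blinds a request token before the provider signs it, so the signature the provider later sees during verification cannot be linked to any signing session. The sketch below uses textbook toy RSA parameters and a hypothetical token message purely for illustration; a real deployment would use a standardized scheme with full-size keys.

```python
import hashlib
from math import gcd

# Toy textbook RSA parameters (illustration only; real systems need >= 2048-bit keys)
p, q = 61, 53
n = p * q          # 3233
e = 17             # public exponent
d = 2753           # private exponent: e*d = 1 (mod (p-1)*(q-1))

def h(msg: bytes) -> int:
    # Hash the request token to an integer modulo n
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(msg: bytes, r: int) -> int:
    # User blinds H(m) with a random factor r^e; the provider never sees H(m)
    assert gcd(r, n) == 1
    return (h(msg) * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    # Provider signs the blinded value with its private key
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    # User strips the blinding factor: (H(m)*r^e)^d * r^(-1) = H(m)^d (mod n)
    return (blind_sig * pow(r, -1, n)) % n

def verify(msg: bytes, sig: int) -> bool:
    # Standard RSA verification on the unblinded signature
    return pow(sig, e, n) == h(msg)

msg = b"request token"   # hypothetical per-request token
r = 7                    # in practice, sampled uniformly at random per request
sig = unblind(sign(blind(msg, r)), r)
assert verify(msg, sig)
```

Because the provider only ever observes the blinded value and the final signature is a function of H(m) alone, the two cannot be correlated; a partially blind variant additionally lets the provider embed agreed-upon public information (e.g., an expiry date or service tier) into the signature, which is what makes the subscription- and API-based strategies enforceable.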