State-of-the-art large language models (LLMs) are commonly deployed as online services, requiring users to transmit informative prompts to cloud servers and thus raising substantial privacy concerns. In response, we present ConfusionPrompt, a novel private LLM inference framework that obfuscates the server's view of the user's true intent by (i) decomposing the prompt into sub-prompts, and (ii) generating pseudo-prompts that are submitted to the online LLM alongside the genuine sub-prompts. The user then recomposes the returned responses locally to obtain the final, complete answer. This design endows our framework with two advantages over previous protocols: (i) it integrates seamlessly with existing black-box LLMs, and (ii) it achieves a significantly better privacy-utility trade-off than existing text perturbation-based methods. We develop a $(\lambda, \mu, \rho)$-privacy model to formalize the requirements on a privacy-preserving group of prompts, and provide a complexity analysis confirming the efficiency of ConfusionPrompt. Our empirical evaluation shows that our method offers significantly higher utility than local inference with open-source models and than perturbation-based techniques, while requiring far less memory than open-source LLMs.
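To make the client-side workflow concrete, the sketch below illustrates the decompose-confuse-recompose loop described above. The helper names (`decompose`, `generate_pseudo_prompts`, `query_llm`) and their simple stand-in logic are illustrative assumptions for exposition only, not the paper's actual decomposer, pseudo-prompt generator, or recomposition procedure.

```python
import random


def decompose(prompt: str) -> list[str]:
    # Hypothetical stand-in: split the prompt into sentence-level sub-prompts.
    # In ConfusionPrompt this step would be handled by a local decomposer.
    return [s.strip() for s in prompt.split(".") if s.strip()]


def generate_pseudo_prompts(sub_prompt: str, k: int) -> list[str]:
    # Hypothetical stand-in: produce k plausible decoy prompts. The real
    # generator would aim to satisfy the (lambda, mu, rho)-privacy requirement.
    return [f"[decoy prompt {i} of similar form to the genuine sub-prompt]" for i in range(k)]


def query_llm(prompt: str) -> str:
    # Placeholder for the black-box online LLM call (e.g., an HTTP API request).
    return f"response to: {prompt}"


def confusion_prompt_inference(prompt: str, k: int = 3) -> str:
    """Decompose the prompt, mix each genuine sub-prompt with decoys,
    query the server, keep only the genuine responses, and recompose locally."""
    final_parts = []
    for sub in decompose(prompt):
        batch = generate_pseudo_prompts(sub, k) + [sub]
        random.shuffle(batch)  # the server cannot tell genuine from decoy
        responses = {p: query_llm(p) for p in batch}
        final_parts.append(responses[sub])  # the user keeps only the genuine response
    # Recomposition is also local; here we simply concatenate the partial answers.
    return " ".join(final_parts)


if __name__ == "__main__":
    print(confusion_prompt_inference("Summarize my medical report. List follow-up actions."))
```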