Large language models (LLMs), such as ChatGPT, have greatly simplified text generation tasks. However, they have also raised privacy concerns, including data leakage and unauthorized data collection. Existing solutions for privacy-preserving inference face practical challenges in computation time and communication cost. In this paper, we propose InferDPT, the first practical framework for privacy-preserving Inference of black-box LLMs that implements Differential Privacy in Text generation. InferDPT comprises two key modules: the "perturbation module" uses the exponential mechanism to generate a perturbed prompt, enabling privacy-preserving inference with black-box LLMs, and the "extraction module", inspired by knowledge distillation and retrieval-augmented generation, extracts coherent and consistent text from the perturbed generation result, ensuring that the text generation task completes successfully. To address the susceptibility of previous exponential mechanisms to embedding revision attacks, we introduce RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of InferDPT, which introduces the concept of "RANdom adjacency" for TEXT perturbation within the prompt. Experimental results on three datasets demonstrate that the text generation quality of InferDPT is comparable to that of non-private GPT-4, and that RANTEXT surpasses the state-of-the-art mechanisms SANTEXT+ and CUSTEXT+ in the privacy-utility trade-off. Even with a privacy parameter ε of 6.0, RANTEXT achieves an average privacy protection rate exceeding 90% against embedding revision attacks, which is 0.58 times higher than that of SANTEXT+ and 3.35 times higher than that of CUSTEXT+.
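To make the perturbation idea concrete, the following is a minimal, hypothetical sketch of exponential-mechanism token perturbation of the kind the abstract describes. It is not InferDPT's or RANTEXT's actual algorithm: the toy vocabulary, the 2-D embeddings, the utility function (negative embedding distance), and the sensitivity value are all illustrative assumptions.

```python
import math
import random

# Toy vocabulary with 2-D "embeddings" (hypothetical values, for illustration only).
VOCAB = {
    "good": (1.0, 0.2), "great": (0.9, 0.3), "fine": (0.8, 0.1),
    "bad": (-1.0, 0.1), "poor": (-0.9, 0.2),
}

def utility(w1, w2):
    # Negative Euclidean distance between embeddings: closer tokens score higher,
    # so semantically similar replacements are more likely to be sampled.
    return -math.dist(VOCAB[w1], VOCAB[w2])

def perturb_token(token, epsilon, sensitivity=2.0):
    # Exponential mechanism: sample a replacement token with probability
    # proportional to exp(epsilon * utility / (2 * sensitivity)).
    candidates = list(VOCAB)
    weights = [
        math.exp(epsilon * utility(token, c) / (2 * sensitivity))
        for c in candidates
    ]
    return random.choices(candidates, weights=weights, k=1)[0]

def perturb_prompt(tokens, epsilon):
    # Perturb only in-vocabulary tokens; pass the rest through unchanged.
    return [perturb_token(t, epsilon) if t in VOCAB else t for t in tokens]

print(perturb_prompt(["the", "movie", "was", "good"], epsilon=6.0))
```

A larger ε concentrates the sampling distribution on high-utility (nearby) replacements, trading privacy for utility, which is the trade-off the abstract evaluates against SANTEXT+ and CUSTEXT+.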