The emergence of ChatGPT marks the arrival of the large language model (LLM) era. While LLMs demonstrate their power across a variety of fields, they also raise serious privacy concerns, since users' queries are sent to the model provider. On the other hand, deploying the LLM on the user's device would leak the entire model. Existing methods based on secure multiparty computation (MPC) protect both the privacy of the model parameters and the user queries, but they require gigabytes of data transfer and several minutes to generate a single token, making them impractical for most real-world applications. To improve the efficiency of private LLM inference, we propose PermLLM, which accelerates the evaluation of non-linear functions using secure random permutation. Combined with optimized secret sharing protocols and homomorphic encryption, PermLLM achieves two-party private inference of the ChatGLM-6B model at around 3 s/token under a realistic network setting (10 ms RTT and 1 Gbps bandwidth), which is orders of magnitude faster than existing MPC solutions.
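The core idea can be illustrated with a toy sketch: to evaluate an elementwise non-linear function on additively secret-shared data, the shares are permuted under a secret random permutation before reconstruction, so the evaluating party sees the multiset of values but not which position each value belongs to; the result is re-shared and inverse-permuted. This is a hypothetical simplification for intuition only (the ring size, the `relu_ring` function, and the trust assumptions are illustrative; the actual protocol involves additional masking and cryptographic machinery):

```python
import numpy as np

MOD = 2**32  # toy ring for additive secret sharing (illustrative choice)

def share(x):
    """Split an integer vector x into two additive shares mod MOD."""
    r = np.random.randint(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x - r) % MOD

def relu_ring(v):
    """Toy elementwise non-linear function (ReLU) on ring-encoded signed values."""
    signed = v.astype(np.int64) - np.where(v >= MOD // 2, MOD, 0)
    return (np.maximum(signed, 0).astype(np.uint64)) % MOD

# x encodes the signed vector [5, 0, -3, 7] in the ring
x = np.array([5, 0, MOD - 3, 7], dtype=np.uint64)
x0, x1 = share(x)  # party 0 holds x0, party 1 holds x1

# A secret random permutation hides positions from the evaluating party
pi = np.random.permutation(len(x))

# Shares are permuted, then reconstructed in permuted order:
# the evaluator learns the values only up to an unknown reordering.
revealed = (x0[pi] + x1[pi]) % MOD
y_perm = relu_ring(revealed)   # non-linear function evaluated in the clear
y0p, y1p = share(y_perm)       # result is re-shared

# Applying the inverse permutation to the shares restores the original order
inv = np.argsort(pi)
y0, y1 = y0p[inv], y1p[inv]
assert np.array_equal((y0 + y1) % MOD, relu_ring(x))
```

The permutation step replaces the expensive MPC circuits normally needed for non-linearities (GeLU, softmax, etc.) with cheap local computation, which is where the claimed speedup over generic MPC comes from.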