Confidential computing on GPUs, such as the NVIDIA H100, mitigates the security risks of outsourced Large Language Models (LLMs) through strong isolation and data encryption. However, this encryption incurs a significant performance overhead, reducing throughput by up to 52.8 percent and 88.2 percent when serving OPT-30B and OPT-66B, respectively. To address this challenge, we introduce PipeLLM, a user-transparent runtime system. PipeLLM removes the overhead by overlapping encryption with GPU computation through pipelining, an idea inspired by CPU instruction pipelining, thereby effectively hiding the latency introduced by encryption. The primary technical challenge is that, unlike in CPUs, the encryption module has no prior knowledge of which data needs encryption until the GPU requests it. To this end, we propose speculative pipelined encryption, which predicts the data requiring encryption by analyzing the serving patterns of LLMs. We further develop an efficient, low-cost pipeline relinquishing mechanism to handle incorrect predictions. Our experiments on an NVIDIA H100 GPU show that, compared with vanilla systems without confidential computing (e.g., vLLM, PEFT, and FlexGen), PipeLLM incurs modest overhead (less than 19.6 percent in throughput) across LLM sizes ranging from 13B to 175B.
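The core idea above can be illustrated with a minimal sketch (not the authors' implementation): while the GPU computes, the encryptor speculatively encrypts the block it predicts will be requested next; on a correct prediction the ciphertext is ready immediately, and on a misprediction the speculative work is discarded (pipeline relinquishing) and the block is encrypted on demand. The sequential next-block predictor and the toy XOR cipher standing in for AES are assumptions for illustration only.

```python
from itertools import cycle

KEY = b"demo-key"

def encrypt(block: bytes) -> bytes:
    # Stand-in cipher: XOR with a repeating key (NOT secure; illustration only).
    return bytes(b ^ k for b, k in zip(block, cycle(KEY)))

class SpeculativeEncryptor:
    def __init__(self):
        self.pending = None  # (predicted_block_id, ciphertext) or None
        self.hits = 0
        self.misses = 0

    def predict_next(self, last_id: int) -> int:
        # Assumption: LLM serving tends to access memory blocks sequentially,
        # so a simple next-block predictor captures the common pattern.
        return last_id + 1

    def speculate(self, last_id: int, fetch):
        # Runs concurrently with GPU compute in the real system; here it is
        # just called before the request to model the overlap.
        nxt = self.predict_next(last_id)
        self.pending = (nxt, encrypt(fetch(nxt)))

    def request(self, block_id: int, fetch) -> bytes:
        if self.pending and self.pending[0] == block_id:
            self.hits += 1
            ct = self.pending[1]   # speculation hit: ciphertext already ready
        else:
            self.misses += 1       # mispredict: relinquish and encrypt on demand
            ct = encrypt(fetch(block_id))
        self.pending = None
        return ct

blocks = {i: f"block-{i}".encode() for i in range(5)}
fetch = blocks.__getitem__

enc = SpeculativeEncryptor()
last = 0
for bid in [1, 2, 3, 1, 2]:        # mostly sequential, one out-of-order access
    enc.speculate(last, fetch)
    ct = enc.request(bid, fetch)
    assert encrypt(ct) == blocks[bid]  # XOR cipher is its own inverse
    last = bid

print(enc.hits, enc.misses)
```

The sequential accesses hit the speculation, while the single out-of-order access forces a relinquish, mirroring how a mispredicted pipeline stage falls back without blocking subsequent requests.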