Diffusion models are a powerful class of generative models that produce images and other content from user prompts, but they are computationally intensive. To mitigate this cost, recent academic and industry work has adopted approximate caching, which reuses intermediate states computed for similar prompts. While efficient, this optimization introduces new security risks by breaking isolation among users. This paper provides a comprehensive assessment of the security vulnerabilities introduced by approximate caching. First, we demonstrate a remote covert channel established through the approximate cache, where a sender injects prompts with special keywords into the cache system and a receiver can recover them even days later, enabling information exchange. Second, we introduce a prompt stealing attack that exploits the approximate cache, in which an attacker can recover existing cached prompts from cache hits. Finally, we introduce a poisoning attack that embeds the attacker's logos into previously stolen prompts, causing unexpected logo rendering for requests that hit the poisoned cache entries. All of these attacks are performed remotely through the serving system, demonstrating severe security vulnerabilities in approximate caching. The code for this work is available.