Large language models (LLMs) possess extensive knowledge and question-answering capabilities and have been widely deployed in privacy-sensitive domains such as finance and medical consultation. During LLM inference, cache-sharing mechanisms are commonly employed to improve efficiency by reusing cached states or responses for identical or similar requests. However, we identify that these cache mechanisms risk leaking private inputs: caching produces observable variations in response times, making it a strong signal for a timing-based side-channel attack. In this study, we propose a novel timing-based side-channel attack that steals user inputs during LLM inference. A cache-based attack must construct candidate inputs within a vast search space in order to hit, and thereby recover, cached user queries. To address this challenge, we propose two primary components. The input constructor employs machine learning and LLM-based techniques to learn vocabulary correlations, and applies optimized search mechanisms for generalized input construction. The time analyzer performs statistical fitting of response times with outlier elimination to identify cache-hit patterns, continuously feeding back to refine the constructor's search strategy. We conduct experiments across two cache mechanisms, and the results demonstrate that our approach consistently attains high attack success rates across various applications. Our work highlights the security vulnerabilities that accompany performance optimizations, underscoring the need to prioritize privacy and security alongside efficiency improvements in LLM inference.
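To make the time analyzer's role concrete, the following is a minimal sketch of timing-based cache-hit detection, assuming a black-box `query_fn` that issues one inference request and blocks until the response arrives. The IQR-based outlier trimming, the trial count, and the `threshold` ratio against a cold-cache baseline are illustrative assumptions, not the paper's calibrated statistical fitting procedure.

```python
import statistics
import time
from typing import Callable, List


def trimmed_latencies(samples: List[float], k: float = 1.5) -> List[float]:
    """Discard outliers beyond k * IQR from the interquartile range,
    suppressing network jitter and scheduler noise (assumed heuristic)."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [s for s in samples if lo <= s <= hi]


def measure_latency(query_fn: Callable[[str], None], query: str,
                    trials: int = 20) -> float:
    """Median response time of a candidate query after outlier removal."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        query_fn(query)  # hypothetical black-box request to the LLM service
        samples.append(time.perf_counter() - start)
    return statistics.median(trimmed_latencies(samples))


def is_cache_hit(latency: float, miss_baseline: float,
                 threshold: float = 0.5) -> bool:
    """Flag a cache hit when the latency falls well below the cold-cache
    baseline; the 0.5 ratio is an illustrative placeholder."""
    return latency < threshold * miss_baseline
```

A hit flagged by `is_cache_hit` would signal the input constructor that the current candidate overlaps a cached user query, steering subsequent search toward that region of the vocabulary space; the median-of-trimmed-samples estimator is chosen here because it is robust to the heavy-tailed latency noise typical of networked inference services.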