Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. In evaluations using research prototypes and the production-grade vLLM serving framework, we show that an adversary monitoring these patterns can fingerprint user queries (from a set of 50 prompts) with over 75% accuracy across four speculative-decoding schemes at temperature 0.3: REST (100%), LADE (91.6%), BiLD (95.2%), and EAGLE (77.6%). Even at temperature 1.0, accuracy remains far above the 2% random-guess baseline: REST (99.6%), LADE (61.2%), BiLD (63.6%), and EAGLE (24%). We also demonstrate that an attacker can leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. To defend against these attacks, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.