Local LLM inference on resource-constrained edge devices faces a severe performance bottleneck. This paper proposes distributed prompt caching, which enhances inference performance by cooperatively sharing intermediate processing states across multiple low-end edge devices. To fully exploit prompt similarity, our distributed caching mechanism also supports partial matching. Because this approach introduces communication overhead for state sharing over a wireless network, we introduce a Bloom-filter-based data structure, referred to as a catalog, that determines whether a remote peer possesses the desired internal states, thereby suppressing unnecessary communication. Experiments using the Gemma-3 270M model and the MMLU dataset on the Raspberry Pi Zero 2W platform demonstrate that the proposed approach reduces TTFT (Time to First Token) and TTLT (Time to Last Token) by 93.12% and 50.07% on average, respectively.
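The catalog idea described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the paper's implementation: class names, parameter choices (bit-array size, hash count), and the use of SHA-256 are our assumptions. A peer inserts the prompt prefixes whose cached states it holds; a querying device then tests membership locally and skips any peer whose catalog returns "definitely absent", avoiding a round trip over the wireless network.

```python
import hashlib

class Catalog:
    """Hypothetical Bloom-filter catalog sketch: advertises which
    prompt-prefix states a peer holds, so other devices can avoid
    contacting peers that certainly lack the desired state."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m = m_bits          # size of the bit array
        self.k = k_hashes        # number of hash functions
        self.bits = 0            # bit array packed into an int

    def _positions(self, key: str):
        # Derive k bit positions by salting SHA-256 with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, prefix: str) -> None:
        for p in self._positions(prefix):
            self.bits |= 1 << p

    def may_contain(self, prefix: str) -> bool:
        # False => definitely absent: no network request needed.
        # True  => possibly present: Bloom filters admit false
        #          positives, so a request may still miss.
        return all(self.bits >> p & 1 for p in self._positions(prefix))

# A device consults the (locally stored) catalog before contacting a peer.
peer = Catalog()
peer.add("The capital of France is")
assert peer.may_contain("The capital of France is")  # held prefix: query the peer
```

A `False` result is definitive, so every negative lookup saves one wireless exchange; the only cost of a false positive is a wasted request, never an incorrect cache hit, since the peer would simply report a miss.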