The key-value (KV) cache in large language models presents a significant memory bottleneck during inference, growing linearly with sequence length and often exceeding the memory footprint of the model weights themselves. We implement and evaluate GPU-accelerated INT8 quantization for KV cache compression, achieving a 4$\times$ memory reduction with minimal accuracy degradation. We develop four CUDA kernel variants -- naive, tiled, coarsened, and vectorized -- and benchmark them across realistic workload sizes of up to 1 billion elements. Our vectorized kernel achieves up to a 1,694$\times$ speedup over CPU baselines while keeping reconstruction error below 0.004 and attention score error below 0.1, even for 8K-dimensional heads. These results demonstrate that INT8 quantization is a practical approach for reducing memory pressure in LLM inference, with negligible computational overhead (6--58 ms) and minimal impact on downstream model behavior.
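To make the compression scheme concrete, the sketch below illustrates symmetric per-tensor INT8 quantization of FP32 KV-cache entries with a float4/char4-vectorized CUDA kernel, which is where the 4$\times$ reduction comes from (4 bytes per FP32 value down to 1 byte per INT8 value). This is a minimal illustrative sketch under assumed choices (per-tensor absolute-maximum scaling, the kernel name, and the launch configuration), not the paper's actual kernels.

\begin{verbatim}
// Minimal sketch: symmetric per-tensor INT8 quantization of a KV-cache
// buffer, vectorized over float4/char4. Scaling scheme and names are
// illustrative assumptions, not the paper's exact implementation.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <cmath>

// Quantize 4 floats at a time: q = round(x / scale), clamped to [-127, 127].
__global__ void quantize_int8_vec4(const float4* __restrict__ in,
                                   char4* __restrict__ out,
                                   float inv_scale, int n_vec4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_vec4) {
        float4 v = in[i];
        char4 q;
        q.x = (signed char)fmaxf(-127.f, fminf(127.f, rintf(v.x * inv_scale)));
        q.y = (signed char)fmaxf(-127.f, fminf(127.f, rintf(v.y * inv_scale)));
        q.z = (signed char)fmaxf(-127.f, fminf(127.f, rintf(v.z * inv_scale)));
        q.w = (signed char)fmaxf(-127.f, fminf(127.f, rintf(v.w * inv_scale)));
        out[i] = q;
    }
}

int main() {
    const int N = 1 << 20;  // number of FP32 KV-cache elements (multiple of 4)
    float* h = (float*)malloc(N * sizeof(float));
    float absmax = 0.f;
    for (int i = 0; i < N; ++i) {            // synthetic KV-cache contents
        h[i] = sinf(i * 0.001f);
        absmax = fmaxf(absmax, fabsf(h[i]));
    }
    float scale = absmax / 127.f;            // symmetric per-tensor scale

    float* d_in;
    char*  d_out;
    cudaMalloc(&d_in,  N * sizeof(float));
    cudaMalloc(&d_out, N * sizeof(char));    // 4x smaller than the FP32 input
    cudaMemcpy(d_in, h, N * sizeof(float), cudaMemcpyHostToDevice);

    int n_vec4 = N / 4, threads = 256;
    int blocks = (n_vec4 + threads - 1) / threads;
    quantize_int8_vec4<<<blocks, threads>>>((const float4*)d_in,
                                            (char4*)d_out,
                                            1.f / scale, n_vec4);
    cudaDeviceSynchronize();
    printf("quantized %d elements, scale = %g\n", N, scale);

    cudaFree(d_in);
    cudaFree(d_out);
    free(h);
    return 0;
}
\end{verbatim}

Dequantization is the inverse map $\hat{x} = q \cdot \text{scale}$, so reconstruction error is bounded by half the quantization step, which is consistent with the sub-0.004 errors reported above for well-scaled tensors.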