Six-bit quantization (FP6) can effectively reduce the size of large language models (LLMs) while consistently preserving model quality across a variety of applications. However, existing systems do not provide Tensor Core support for FP6 quantization and struggle to achieve practical performance improvements during LLM inference. Supporting FP6 quantization efficiently on GPUs is challenging due to (1) the hardware-unfriendly memory access patterns of model weights with irregular bit-width and (2) the high runtime overhead of weight de-quantization. To address these problems, we propose TC-FPx, the first full-stack GPU kernel design scheme with unified Tensor Core support for floating-point weights of various quantization bit-widths. We integrate the TC-FPx kernel into an existing inference system, providing new end-to-end support (called FP6-LLM) for quantized LLM inference, where better trade-offs between inference cost and model quality are achieved. Experiments show that FP6-LLM enables the inference of LLaMA-70b using only a single GPU, achieving 1.69x-2.65x higher normalized inference throughput than the FP16 baseline. The source code is publicly available at https://github.com/usyd-fsalab/fp6_llm.
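To make the two challenges concrete, the sketch below illustrates what FP6 quantization and de-quantization mean at the value level. It assumes a 1-sign/3-exponent/2-mantissa ("e3m2") layout with exponent bias 3 and no inf/NaN encodings; the paper's exact FP6 format and its bit-parallel GPU implementation are not specified here, so the helper names (fp6_decode, fp6_quantize) and the layout are illustrative assumptions, not the TC-FPx implementation.

```python
# Illustrative sketch of FP6 numerics, assuming a 1-sign / 3-exponent /
# 2-mantissa ("e3m2") layout with exponent bias 3 and no inf/NaN encodings.
# The paper's actual FP6 format and kernel-level bit tricks may differ.

EXP, MAN = 3, 2
BIAS = (1 << (EXP - 1)) - 1  # exponent bias = 3

def fp6_decode(bits: int) -> float:
    """De-quantize: map a 6-bit pattern to the real value it encodes."""
    s = (bits >> (EXP + MAN)) & 0x1
    e = (bits >> MAN) & ((1 << EXP) - 1)
    m = bits & ((1 << MAN) - 1)
    if e == 0:  # subnormal: no implicit leading 1, fixed exponent 1 - BIAS
        val = (m / (1 << MAN)) * 2.0 ** (1 - BIAS)
    else:       # normal: implicit leading 1
        val = (1 + m / (1 << MAN)) * 2.0 ** (e - BIAS)
    return -val if s else val

# All 64 representable FP6 values (max magnitude 28.0 under this layout).
FP6_VALUES = sorted({fp6_decode(b) for b in range(1 << 6)})

def fp6_quantize(x: float) -> float:
    """Quantize: snap an FP16/FP32 weight to the nearest FP6 value."""
    return min(FP6_VALUES, key=lambda v: abs(v - x))

if __name__ == "__main__":
    for w in (0.1, -1.37, 3.14159, 100.0):  # 100.0 saturates to 28.0
        q = fp6_quantize(w)
        print(f"{w:>10.5f} -> {q:>8.4f}  (abs err {abs(w - q):.5f})")
```

The sketch captures only the numerics; the systems difficulty comes from elsewhere. Because 6-bit values do not align to byte boundaries, four packed FP6 weights span three bytes and neighboring weights straddle byte and word boundaries, which is the "irregular bit-width" memory-access problem above, and every weight must be de-quantized back to FP16 at runtime before the Tensor Core matrix multiplication, which is the source of the de-quantization overhead.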