Large language models (LLMs) have been widely deployed but face challenges in efficient inference. While quantization reduces computational demands, ultra-low-bit quantization at arbitrary precision is hindered by limited GPU Tensor Core support and inefficient memory management, leading to suboptimal acceleration. To address these challenges, we propose a comprehensive acceleration scheme for arbitrary-precision LLMs. At its core, we introduce a novel bipolar-INT data format that facilitates parallel computing and supports symmetric quantization, effectively reducing data redundancy. Building on this, we implement an arbitrary-precision matrix multiplication scheme that decomposes and recovers matrices at the bit level, enabling flexible precision while maximizing GPU Tensor Core utilization. Furthermore, we develop an efficient matrix preprocessing method that optimizes the data layout for subsequent computations. Finally, we design a data-recovery-oriented memory management system that strategically utilizes fast shared memory, significantly enhancing kernel execution speed and minimizing memory access latency. Experimental results demonstrate our approach's effectiveness, with up to 2.4× speedup in matrix multiplication over NVIDIA's CUTLASS. When integrated into LLMs, we achieve up to 6.7× inference acceleration. These improvements substantially enhance LLM inference efficiency, enabling broader and more responsive LLM applications.
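To make the two core ideas concrete, the following is a minimal NumPy sketch, not the paper's CUDA/Tensor Core implementation: it encodes unsigned integers as bipolar bit-planes with digits in {-1, +1} (yielding a symmetric value range, so no separate sign handling is needed) and recovers a full-precision product from per-bit-plane matrix multiplications weighted by powers of two. All function names and the specific encoding `value = 2u - (2^w - 1)` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def bipolar_planes(u, w):
    """Decompose unsigned w-bit integers into w bipolar bit-planes in {-1, +1}.

    Assumed encoding: value = sum_k 2^k * b_k = 2*u - (2^w - 1),
    which gives a symmetric range without a dedicated sign bit.
    """
    return [2 * ((u >> k) & 1) - 1 for k in range(w)]

def bipolar_decode(u, w):
    """Map the w-bit unsigned code u back to its symmetric bipolar value."""
    return 2 * u.astype(np.int64) - (2**w - 1)

def bitserial_matmul(A_u, B_u, wa, wb):
    """Recover decode(A) @ decode(B) from 1-bit plane products.

    Each plane product stands in for a 1-bit Tensor Core matmul;
    the 2^(i+j) weights reassemble the arbitrary-precision result.
    """
    A_planes = bipolar_planes(A_u, wa)
    B_planes = bipolar_planes(B_u, wb)
    C = np.zeros((A_u.shape[0], B_u.shape[1]), dtype=np.int64)
    for i, Ai in enumerate(A_planes):
        for j, Bj in enumerate(B_planes):
            C += (1 << (i + j)) * (Ai.astype(np.int64) @ Bj.astype(np.int64))
    return C

# Check the recovery against a direct full-precision product.
rng = np.random.default_rng(0)
wa, wb = 2, 3  # e.g. 2-bit activations times 3-bit weights
A = rng.integers(0, 2**wa, size=(4, 8))
B = rng.integers(0, 2**wb, size=(8, 5))
reference = bipolar_decode(A, wa) @ bipolar_decode(B, wb)
assert np.array_equal(bitserial_matmul(A, B, wa, wb), reference)
```

Because every bit-plane matmul is a 1-bit operation, any (wa, wb) precision pair reduces to wa x wb such products, which is what lets a fixed-function 1-bit Tensor Core path serve arbitrary precisions.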