This paper presents PRIMAL, a processing-in-memory (PIM) based large language model (LLM) inference accelerator supporting low-rank adaptation (LoRA). PRIMAL integrates heterogeneous PIM processing elements (PEs) interconnected by a 2D-mesh inter-PE computational network (IPCN). A novel SRAM reprogramming and power gating (SRPG) scheme enables pipelined LoRA updates and sub-linear power scaling by overlapping reconfiguration with computation and gating idle resources. PRIMAL further employs optimized spatial mapping and dataflow orchestration to minimize communication overhead, achieving $1.5\times$ the throughput and $25\times$ the energy efficiency of an NVIDIA H100 with LoRA rank 8 applied to the Q and V projections on Llama-13B.