As large language models (LLMs) demonstrate powerful capabilities, deploying them on edge devices has become increasingly crucial, offering advantages in privacy and real-time interaction. QLoRA has emerged as the standard approach for on-device LLMs, leveraging quantized models to reduce memory and computational costs while utilizing LoRA for task-specific adaptability. In this work, we propose ROMA, a QLoRA accelerator with a hybrid storage architecture that uses ROM for quantized base models and SRAM for LoRA weights and KV cache. Our insight is that the quantized base model is stable and converged, making it well-suited for ROM storage. Meanwhile, LoRA modules offer the flexibility to adapt to new data without requiring updates to the base model. To further reduce the area cost of ROM, we introduce a novel B-ROM design and integrate it with the compute unit to form a fused cell for efficient use of chip resources. ROMA can effectively store both a 4-bit 3B and a 2-bit 8B LLaMA model entirely on-chip, achieving a notable generation speed exceeding 20,000 tokens/s without requiring external memory.