As large language models (LLMs) demonstrate powerful capabilities, deploying them on edge devices has become increasingly important, offering advantages in privacy and real-time interaction. QLoRA has emerged as the standard approach for on-device LLMs, leveraging quantized models to reduce memory and computational costs while utilizing LoRA for task-specific adaptability. In this work, we propose ROMA, a QLoRA accelerator with a hybrid storage architecture that uses ROM for the quantized base model and SRAM for LoRA weights and the KV cache. Our insight is that the quantized base model is stable and converged, making it well-suited for ROM storage, while the LoRA modules provide the flexibility to adapt to new data without requiring updates to the base model. To further reduce the area cost of ROM, we introduce a novel B-ROM design and integrate it with the compute unit to form a fused cell for efficient use of chip resources. ROMA stores either a 4-bit 3B or a 2-bit 8B LLaMA model entirely on-chip, achieving generation speeds exceeding 20,000 tokens/s without requiring external memory.
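The storage split that ROMA exploits follows directly from the structure of a QLoRA layer: a frozen quantized base weight (ROM-resident in ROMA) plus a small trainable low-rank update (SRAM-resident). A minimal NumPy sketch of that forward pass, using a hypothetical per-tensor int4 quantization scheme rather than ROMA's actual format:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4

# Frozen base weight, symmetrically quantized to 4-bit integers.
# (Hypothetical per-tensor scale; real schemes typically use per-group scales.)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
scale = np.abs(W).max() / 7.0  # map the max magnitude to the int4 value 7
W_q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)  # read-only: ROM

# LoRA adapter: low-rank factors A (rank x d_in) and B (d_out x rank).
# These are the only weights that change during adaptation: SRAM.
A = (rng.standard_normal((rank, d_in)) * 0.01).astype(np.float32)
B = np.zeros((d_out, rank), dtype=np.float32)  # B starts at zero, so the
                                               # adapter is initially a no-op

def qlora_forward(x):
    base = (W_q.astype(np.float32) * scale) @ x  # dequantize-and-multiply
    delta = B @ (A @ x)                          # low-rank correction
    return base + delta

x = rng.standard_normal(d_in).astype(np.float32)
y = qlora_forward(x)
```

Because `W_q` is never written after deployment, it can live in ROM; only `A`, `B`, and the KV cache need writable on-chip memory.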