Analog in-memory computing (AIMC) has emerged as a promising solution to overcome the von Neumann bottleneck, accelerating neural network computations and improving computational efficiency. While AIMC has demonstrated success with architectures such as CNNs, MLPs, and RNNs, deploying transformer-based models on AIMC presents unique challenges. Transformers are expected to handle diverse downstream tasks and adapt to new user data or instructions after deployment, which calls for deployment approaches flexible enough to satisfy AIMC constraints. In this paper, we propose a novel method for deploying pre-trained transformer models onto AIMC hardware. Unlike traditional approaches, which require hardware-aware training, our technique allows direct deployment without retraining the original model. Instead, we utilize lightweight, low-rank adapters -- compact modules stored in digital cores -- to adapt the model to hardware constraints. We validate our approach on MobileBERT, demonstrating accuracy on par with, or even exceeding, a traditional hardware-aware training approach. Our method is particularly appealing in multi-task scenarios, as it enables a single analog model to be reused across multiple tasks. Moreover, it supports on-chip adaptation to new hardware constraints and tasks without updating the analog weights, providing a flexible and versatile solution for real-world AI applications. Code is available.
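To make the adapter idea concrete, the following is a minimal sketch (not the paper's actual code) of the low-rank adapter pattern the abstract describes: a frozen weight matrix `W` stands in for the analog-core weights, while small digital matrices `A` and `B` form a trainable low-rank correction. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4  # adapter rank chosen much smaller than d_in, d_out
W = rng.standard_normal((d_out, d_in))        # frozen weights, mapped to analog cores
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable adapter, kept in digital cores
B = np.zeros((d_out, rank))                   # zero-init so the adapter starts as a no-op

def forward(x: np.ndarray) -> np.ndarray:
    # Analog matrix-vector product plus a cheap low-rank digital correction.
    # Only A and B would be updated to compensate for hardware constraints;
    # W itself is never rewritten.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
y = forward(x)
# With B initialized to zero, the output initially matches the frozen weights alone.
assert np.allclose(y, W @ x)
```

Because `A` and `B` together hold only `rank * (d_in + d_out)` parameters, swapping in a different adapter pair per task lets one set of analog weights serve multiple downstream tasks, which is the multi-task reuse the abstract highlights.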