Computing-in-Memory (CIM) macros have gained popularity for deep learning acceleration due to their highly parallel computation and low power consumption. However, limited macro size and ADC precision introduce throughput and accuracy bottlenecks. This paper proposes a two-stage CIM-aware model adaptation process. The first stage compresses the model and reallocates resources according to layer importance and macro size constraints, reducing weight loading latency while improving resource utilization and maintaining accuracy. The second stage performs quantization-aware training that incorporates partial-sum quantization under the ADC precision constraint to mitigate quantization error during inference. The proposed approach raises CIM array utilization to 90\%, enables concurrent activation of up to 256 word lines, and achieves up to 93\% compression, all while preserving accuracy comparable to previous methods.
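As a rough illustration of the second stage, the sketch below simulates ADC-limited partial-sum quantization inside a quantization-aware training forward pass. It is not the paper's implementation: the function names, the symmetric uniform quantizer, the straight-through estimator, and the default macro size of 256 word lines and 8-bit ADC are all illustrative assumptions.

\begin{verbatim}
# Minimal sketch (assumed, not the paper's code): emulate a CIM macro whose
# analog partial sums pass through a limited-precision ADC during QAT.
import torch

def fake_quantize(x, num_bits, x_max):
    """Symmetric uniform fake-quantization with a straight-through estimator."""
    levels = 2 ** (num_bits - 1) - 1
    scale = x_max / levels
    x_q = torch.clamp(torch.round(x / scale), -levels, levels) * scale
    return x + (x_q - x).detach()  # forward: quantized; backward: identity

def cim_linear(x, weight, macro_rows=256, adc_bits=8):
    """Split a dot product into per-macro partial sums (one macro activates up
    to `macro_rows` word lines), quantize each partial sum to the ADC
    precision, then accumulate the digitized partial sums."""
    in_features = weight.shape[1]
    out = 0.0
    for start in range(0, in_features, macro_rows):
        end = min(start + macro_rows, in_features)
        partial = x[:, start:end] @ weight[:, start:end].t()
        p_max = partial.abs().max().clamp(min=1e-8)  # per-macro dynamic range
        out = out + fake_quantize(partial, adc_bits, p_max)
    return out

# Example: a 1024-input layer mapped onto four 256-row macros.
y = cim_linear(torch.randn(4, 1024), torch.randn(128, 1024))
\end{verbatim}

Training with such a forward pass lets the weights adapt to the partial-sum quantization noise that the ADC would introduce at inference time.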