Processing-in-memory (PIM) is a promising computing paradigm for tackling the "memory wall" challenge. However, the system-level benefits of PIM over traditional von Neumann architectures diminish when the memory array cannot store all the neural network (NN) weights. NN sizes keep increasing, while PIM designs cannot scale up accordingly due to area constraints. This work therefore targets system-level performance optimization and exploration for compact PIM designs. We first analyze the impact of data movement on compact designs. We then propose a novel pipeline method that maximizes the reuse of NN weights to improve the throughput and energy efficiency of inference on compact chips. To further boost throughput, we introduce a scheduling algorithm that mitigates the pipeline bubble problem. Moreover, we investigate the trade-off between network size and system performance for a compact PIM chip. Experimental results show that the proposed algorithm achieves a 2.35x improvement in throughput and a 0.5% improvement in energy efficiency. Compared to an area-unlimited design, our compact chip achieves approximately 56.5% of the throughput and 58.6% of the energy efficiency while using only one-third of the chip area, yielding a 1.3x improvement in area efficiency. Our compact design also outperforms a modern GPU, with 4.56x higher throughput and 157x better energy efficiency. Furthermore, our compact design spends less than 20% of the system energy on data movement as the batch size scales up.