Deep neural network (DNN) accelerators employing crossbar arrays capable of in-memory computing (IMC) are highly promising for neural computing platforms. However, in deeply scaled technologies, interconnect resistance severely impairs IMC robustness, leading to a drop in system accuracy. To address this problem, we propose SWANN, a technique based on shuffling weights in crossbar arrays, which alleviates the detrimental effect of wire resistance on IMC. For 8T-SRAM-based 128×128 crossbar arrays in 7nm technology, SWANN enhances the accuracy from 47.78% to 83.5% for ResNet-20/CIFAR-10. We also show that SWANN can be used synergistically with Partial-Word-Line-Activation, further boosting the accuracy. Moreover, we evaluate the implications of SWANN for compact ferroelectric-transistor-based crossbar arrays. SWANN incurs minimal hardware overhead, with less than a 1% increase in energy consumption. Additionally, the latency and area overheads of SWANN are ~1% and ~16%, respectively, when one ADC is utilized per crossbar array.
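To make the core idea concrete, the sketch below is a toy illustration of why weight shuffling can help: a dot product is invariant under a joint permutation of weights and inputs, so weights can be re-placed along a crossbar column without changing the ideal result, while placement does change which weights suffer the largest IR drop. The linear attenuation model and the magnitude-based placement heuristic are assumptions for illustration only, not SWANN's actual shuffling scheme.

```python
import numpy as np

# Toy model of one crossbar column (hypothetical, for illustration):
# each cell's contribution is attenuated in proportion to its row's
# distance from the column's ADC/sense amplifier.
rng = np.random.default_rng(0)
n = 128                                  # rows in the crossbar column
w = rng.normal(size=n)                   # signed weights mapped to one column
x = rng.integers(0, 2, size=n)           # binary word-line inputs

# Assumed non-ideality: attenuation grows with row index (distance).
alpha = 1.0 - 0.3 * np.arange(n) / n

ideal = w @ x

# Naive placement: large weights may land in heavily attenuated rows.
naive = (w * alpha) @ x

# Shuffled placement (illustrative heuristic, not the paper's algorithm):
# place large-magnitude weights in the least-attenuated rows, and permute
# the inputs identically so the ideal dot product is unchanged.
order = np.argsort(-np.abs(w))           # weight indices, big weights first
slots = np.argsort(-alpha)               # row indices, least attenuated first
perm = np.empty(n, dtype=int)
perm[slots] = order                      # row slots[k] holds weight order[k]
shuffled = (w[perm] * alpha) @ x[perm]   # same ideal result, new placement

print(f"ideal={ideal:.3f}  naive err={abs(naive - ideal):.3f}  "
      f"shuffled err={abs(shuffled - ideal):.3f}")
```

Because `w[perm] @ x[perm] == w @ x` for any permutation, the shuffle costs no recomputation of the ideal result; only the physical mapping (and a matching reordering of inputs) changes, which is why the hardware overhead of such a scheme can stay small.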