Traditional digital implementations of neural accelerators are limited by high power and area overheads, while analog and non-CMOS implementations suffer from noise, device mismatch, and reliability issues. This paper introduces a CMOS Look-Up Table (LUT)-based Neural Accelerator (LUT-NA) framework that reduces the power, latency, and area consumption of traditional digital accelerators through pre-computed, faster look-ups while avoiding the noise and mismatch of analog circuits. To solve the scalability issues of conventional LUT-based computation, we split high-precision multiply-and-accumulate (MAC) operations into lower-precision MACs using a divide-and-conquer approach. We show that LUT-NA achieves up to $29.54\times$ lower area and $3.34\times$ lower energy per inference than traditional LUT-based techniques, and up to $1.23\times$ lower area and $1.80\times$ lower energy per inference than conventional digital MAC-based techniques (Wallace tree/array multipliers), without retraining and without affecting accuracy, even on lottery-ticket-pruned (LTP) models that already reduce the number of required MAC operations by up to 98%. Finally, we introduce mixed-precision analysis in the LUT-NA framework for various LTP models (VGG11, VGG19, ResNet18, ResNet34, GoogLeNet), achieving $32.22\times$-$50.95\times$ lower area and $3.68\times$-$6.25\times$ lower energy per inference across models than traditional LUT-based techniques, and $1.35\times$-$2.14\times$ lower area and $1.99\times$-$3.38\times$ lower energy per inference across models compared to conventional digital MAC-based techniques, with $\sim$1% accuracy loss.
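The divide-and-conquer idea can be illustrated with a minimal software sketch. The example below (an assumption for illustration, not the paper's hardware design; the names `LUT4`, `lut_mul8`, and `lut_mac` are hypothetical) splits an 8-bit by 8-bit multiply into four 4-bit by 4-bit partial products, each served by a small $16\times16$ LUT, and recombines them with shifts and adds. A full 8-bit product LUT would need $2^{16}$ entries, whereas the 4-bit LUT needs only $2^8$, which is the intuition behind the scalability and area savings claimed above.

```python
# Hedged sketch of divide-and-conquer LUT-based multiplication
# (illustrative only; not the paper's exact circuit).

# Pre-computed 4-bit x 4-bit product table: 256 entries,
# versus 65,536 for a monolithic 8-bit x 8-bit LUT.
LUT4 = [[a * b for b in range(16)] for a in range(16)]

def lut_mul8(a: int, b: int) -> int:
    """Multiply two unsigned 8-bit values using only 4-bit LUT look-ups."""
    a_hi, a_lo = a >> 4, a & 0xF
    b_hi, b_lo = b >> 4, b & 0xF
    # Four low-precision partial products, recombined with shifts/adds
    # (no hardware multiplier needed).
    return ((LUT4[a_hi][b_hi] << 8)
            + ((LUT4[a_hi][b_lo] + LUT4[a_lo][b_hi]) << 4)
            + LUT4[a_lo][b_lo])

def lut_mac(weights, activations) -> int:
    """MAC primitive: accumulate LUT-based products over a vector."""
    return sum(lut_mul8(w, x) for w, x in zip(weights, activations))
```

For example, `lut_mul8(200, 123)` returns `24600`, matching `200 * 123`, while touching only 4-bit operand slices; a mixed-precision variant would simply choose a different LUT width per layer.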