Expanding Deep Learning applications toward edge computing demands architectures that deliver high computational performance and efficiency while adhering to tight power and memory constraints. Digital In-Memory Computing (DIMC) addresses this need by performing part of the computation directly within memory arrays, significantly reducing data movement and improving energy efficiency. This paper introduces a novel architecture that extends the Vector RISC-V Instruction Set Architecture (ISA) by integrating a tightly coupled DIMC unit directly into the execution stage of the pipeline to accelerate Deep Learning inference at the edge. Specifically, the proposed approach adds four custom instructions dedicated to data loading, computation, and write-back, enabling flexible and fine-grained control of inference execution on the target architecture. Experimental results demonstrate high utilization of the DIMC tile within the Vector RISC-V core and sustained throughput across the ResNet-50 model, achieving a peak performance of 137 GOP/s. The proposed architecture achieves a 217x speedup over the baseline core and a 50x area-normalized speedup, even when operating near the hardware resource limits. These results confirm the strong potential of the proposed architecture as a scalable and efficient solution for accelerating Deep Learning inference at the edge.