The rapid deployment of deep neural network (DNN) accelerators in safety-critical domains such as autonomous vehicles, healthcare systems, and financial infrastructure calls for robust mechanisms that safeguard data confidentiality and computational integrity. Existing security solutions for DNN accelerators, however, suffer from excessive hardware resource demands and frequent off-chip memory accesses, which degrade performance and scalability. To address these challenges, this paper presents a secure and efficient memory protection framework for DNN accelerators with minimal overhead. First, we propose a bandwidth-aware cryptographic scheme that adapts encryption granularity to memory traffic patterns, striking a balance between security and resource efficiency. Second, we observe that the overlapping regions produced both by the sliding-window pattern of intra-layer tiling and by discrepancies between inter-layer tiling strategies introduce substantial redundant memory accesses and repeated cryptographic computation. Third, we introduce a multi-level authentication mechanism that effectively eliminates these unnecessary off-chip memory accesses, improving performance and energy efficiency. Experimental results show that our framework reduces performance overhead by more than 12% and improves energy efficiency by 87% on both server and edge neural processing units (NPUs), while maintaining robust scalability.
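To make the tiling-overlap observation concrete, the following minimal sketch (not part of the paper's artifact; the function name and parameters are illustrative assumptions) estimates, along one dimension, what fraction of off-chip reads are redundant re-fetches of the halo region shared by neighbouring tiles in a sliding-window intra-layer tiling:

```python
import math

def redundant_read_fraction(fmap: int, tile: int, halo: int) -> float:
    """Fraction of off-chip reads that are redundant re-fetches of
    overlapping halo elements, along one dimension of a feature map.

    fmap: feature-map width in elements
    tile: output-tile width in elements
    halo: overlap with the neighbouring tile (kernel_size - 1 for a
          stride-1 convolution)
    """
    n_tiles = math.ceil(fmap / tile)
    # Each interior tile boundary re-fetches `halo` elements that the
    # previous tile already brought on-chip (and re-decrypted).
    redundant = (n_tiles - 1) * halo
    total = fmap + redundant  # unique elements + re-fetched halos
    return redundant / total

# Example: a 224-wide feature map, 32-wide tiles, 3x3 stride-1 kernel
# (halo = 2) yields 12 redundant reads out of 236 total (~5%).
print(redundant_read_fraction(224, 32, 2))
```

Every redundant element here is also a redundant decryption and authentication, which is why deduplicating these overlaps reduces both memory traffic and cryptographic work.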