A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks. Inherently non-deterministic compute substrates, such as those based on Analog In-Memory Computing (AIMC), have been speculated to provide significant adversarial robustness when performing DNN inference. In this paper, we experimentally validate this conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices. We demonstrate higher adversarial robustness against several types of adversarial attacks when implementing an image classification network. Additional robustness is also observed when performing hardware-in-the-loop attacks, for which the attacker is assumed to have full access to the hardware. A careful study of the various noise sources indicates that a combination of stochastic noise sources (both recurrent and non-recurrent) is responsible for the adversarial robustness, and that their type and magnitude disproportionately affect this property. Finally, it is demonstrated via simulations that additional robustness is still observed when a much larger transformer network is used to implement a Natural Language Processing (NLP) task.
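The mechanism at play can be illustrated with a minimal, hypothetical sketch (this is not the paper's network, chip model, or noise calibration): a gradient-based attack such as FGSM crafts an input perturbation against a fixed, noise-free model, while non-deterministic analog inference is crudely approximated here by adding i.i.d. Gaussian noise to the weights on every forward pass. All values (the toy classifier, the perturbation budget `eps`, and the noise scale) are illustrative assumptions.

```python
# Hypothetical sketch: FGSM attack on a toy linear classifier, plus a crude
# stand-in for stochastic analog (PCM-like) inference via Gaussian weight noise.
# None of these values come from the paper; they are chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: logits = x @ W, constructed so the clean input
# is classified correctly as class 0.
W = np.array([[1.0, -1.0],
              [1.0, -1.0],
              [1.0, -1.0],
              [1.0, -1.0]])
x = np.ones(4)
y = 0  # true class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_x(x, W, y):
    # Gradient of the cross-entropy loss w.r.t. the input x:
    # dL/dx = W @ (softmax(x @ W) - onehot(y)).
    p = softmax(x @ W)
    p[y] -= 1.0
    return W @ p

# FGSM: step the input along the sign of the loss gradient.
eps = 2.0  # illustrative budget, large enough to flip this toy model
x_adv = x + eps * np.sign(loss_grad_x(x, W, y))

def accuracy(x, W, weight_noise=0.0, trials=200):
    # Simulate non-deterministic inference: each forward pass samples a
    # noisy copy of the weights, then we report the fraction of passes
    # that still predict the true class.
    hits = 0
    for _ in range(trials):
        Wn = W + weight_noise * rng.normal(size=W.shape)
        hits += int(np.argmax(x @ Wn) == y)
    return hits / trials
```

With `weight_noise=0.0` the attack is fully effective (the perturbed input is always misclassified), whereas with nonzero noise the attacked accuracy becomes a random quantity: the perturbation was optimized against weights the noisy substrate never exactly realizes, which is the intuition behind the robustness studied in the paper.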