Neural networks are now deployed in a wide range of areas, from object classification to natural language systems. Implementations using analog devices such as memristors promise better power efficiency, potentially bringing these applications to a broader set of environments. However, such systems suffer from more frequent device faults, and their exposure to adversarial attacks has not been studied extensively. In this work, we investigate how nonideality-aware training, a common technique for dealing with physical nonidealities, affects adversarial robustness. We find that adversarial robustness is significantly improved, even with limited knowledge of which nonidealities will be encountered at test time.
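To make the training scheme concrete, below is a minimal PyTorch sketch of one common form of nonideality-aware training: injecting random weight disturbances during the forward pass so that the network learns parameters tolerant to device variability. The `NoisyLinear` layer, the multiplicative Gaussian disturbance model, and the `noise_std` parameter are illustrative assumptions, not the exact nonideality model used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its weights during training to mimic
    memristive device nonidealities (assumed multiplicative disturbance)."""

    def __init__(self, in_features, out_features, noise_std=0.1):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std  # illustrative disturbance magnitude

    def forward(self, x):
        if self.training:
            # Sample a fresh disturbance each forward pass; gradients flow
            # through the perturbed weights, encouraging robust solutions.
            noise = 1.0 + self.noise_std * torch.randn_like(self.weight)
            weight = self.weight * noise
        else:
            weight = self.weight
        return F.linear(x, weight, self.bias)

# Tiny MLP trained with nonideality injection on a dummy batch.
model = nn.Sequential(NoisyLinear(784, 128), nn.ReLU(), NoisyLinear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
model.train()
loss = F.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```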