Electronic-photonic computing systems have emerged as a promising platform for accelerating deep neural network (DNN) workloads. Major efforts have focused on countering hardware non-idealities and boosting efficiency with various hardware/algorithm co-design methods. However, the adversarial robustness of such photonic analog mixed-signal AI hardware remains unexplored. While hardware variations can be mitigated with robustness-driven optimization methods, malicious attacks on the hardware behave quite differently from noise, requiring a protection method tailored to optical analog hardware. In this work, we rethink the role of conventionally undesired non-idealities in photonic analog accelerators and reveal their surprising effectiveness in defending against adversarial weight attacks. Inspired by the protective effects of DNN quantization and pruning, we propose a synergistic defense framework tailored for optical analog hardware that proactively protects sensitive weights via pre-attack unary weight encoding and post-attack vulnerability-aware weight locking. Efficiency-reliability trade-offs are formulated as constrained optimization problems and efficiently solved offline without model re-training costs. Extensive evaluation of various DNN benchmarks with a multi-core photonic accelerator shows that our framework maintains near-ideal on-chip inference accuracy under adversarial bit-flip attacks with merely <3% memory overhead. Our codes are open-sourced at https://github.com/ScopeX-ASU/Unlikely_Hero.
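As a minimal illustration of why unary weight encoding limits bit-flip damage (a conceptual sketch, not the paper's actual implementation), consider representing a quantized weight as a run of ones: any single flipped bit can then change the decoded value by at most 1, whereas flipping the most significant bit of a binary-encoded weight can change it by half its range.

```python
# Conceptual sketch of unary weight encoding (hypothetical helper names;
# the paper's framework uses its own encoding and locking scheme).

def unary_encode(w: int, n_bits: int) -> list[int]:
    """Encode an integer weight w (0 <= w <= n_bits) as w ones then zeros."""
    assert 0 <= w <= n_bits
    return [1] * w + [0] * (n_bits - w)

def unary_decode(bits: list[int]) -> int:
    """Decode by popcount: one bit flip perturbs the value by exactly 1."""
    return sum(bits)

# A single adversarial bit flip changes the decoded weight by only 1:
bits = unary_encode(5, 8)
bits[0] ^= 1  # attacker flips one bit
assert abs(unary_decode(bits) - 5) == 1
```

The trade-off is storage: unary encoding needs n bits to represent n+1 levels, versus log2(n+1) bits for binary, which is why the framework reserves it for the most attack-sensitive weights.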