In neural network (NN) security, safeguarding model integrity and resilience against adversarial attacks has become paramount. This study investigates stochastic computing (SC) as a novel mechanism for fortifying NN models. The primary objective is to assess the efficacy of SC in mitigating the deleterious impact of adversarial attacks on NN outputs. Through a series of rigorous experiments and evaluations, we explore the resilience of SC-based NNs when subjected to adversarial attacks. Our findings reveal that SC introduces a robust layer of defense, significantly reducing the susceptibility of networks to attack-induced alterations in their outcomes. This research contributes novel insights toward the development of more secure and reliable NN systems, essential for applications in sensitive domains where data integrity is of utmost concern.
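The abstract does not detail how SC operates; as background, a minimal sketch of the core idea follows (assuming the common unipolar encoding, where a value in [0, 1] is represented by the fraction of 1s in a random bitstream): arithmetic reduces to simple bitwise logic, and because information is spread across many bits, small perturbations to individual bits barely shift the decoded value. The function names here are illustrative, not from the paper.

```python
import random

def to_bitstream(p, n, rng):
    # Unipolar SC encoding: value p in [0, 1] becomes a length-n
    # random bitstream whose fraction of 1s is approximately p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    # With independent unipolar streams, multiplication is a bitwise AND:
    # P(a AND b) = P(a) * P(b).
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    # Decode a bitstream back to a value: the mean of its bits.
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a, b = 0.8, 0.5
prod_bits = sc_multiply(to_bitstream(a, n, rng), to_bitstream(b, n, rng))
prod = decode(prod_bits)          # approximately a * b = 0.4

# Noise tolerance: flipping a handful of bits (a crude stand-in for an
# attack-induced perturbation) changes the decoded value only slightly.
corrupted = list(prod_bits)
for i in range(50):
    corrupted[i] ^= 1
prod_corrupted = decode(corrupted)
```

This distributed, probabilistic representation is what makes SC a plausible defensive layer: an attacker must perturb a large fraction of a stream to move the decoded value meaningfully.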