Rapid advances in deep learning are accelerating its adoption across a wide range of applications, including safety-critical ones such as self-driving vehicles, drones, robots, and surveillance systems. These advances include applying variations of sophisticated techniques that improve model performance. However, such models are not immune to adversarial manipulations, which can cause a system to misbehave while remaining unnoticed by experts. The frequency with which existing deep learning models are modified necessitates thorough analysis of the impact of those modifications on model robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks. Our methodology examines the robustness of model variations against a range of adversarial attacks. Through these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of the effects of model changes on model robustness.
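The abstract does not specify which attacks are used, but the kind of robustness evaluation it describes can be illustrated with a minimal sketch. Below, a toy logistic-regression "model" stands in for a trained network, and the classic Fast Gradient Sign Method (FGSM) serves as the adversarial attack; all weights, labels, and the epsilon value are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: logistic regression with fixed random weights.
# This is a stand-in for a trained model under evaluation.
w = rng.normal(size=4)
b = 0.1

def predict_proba(X):
    """Probability of class 1 for each row of X."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fgsm(X, y, eps):
    """Fast Gradient Sign Method: perturb each input by eps in the
    sign of the loss gradient. For logistic loss, dL/dx = (p - y) * w."""
    p = predict_proba(X)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

def accuracy(X, y):
    preds = (predict_proba(X) > 0.5).astype(float)
    return float(np.mean(preds == y))

# Evaluate clean vs. adversarial accuracy on random inputs, using the
# model's own predictions as labels so clean accuracy is 1.0 by design.
X = rng.normal(size=(200, 4))
y = (predict_proba(X) > 0.5).astype(float)

clean_acc = accuracy(X, y)               # perfect on unperturbed inputs
adv_acc = accuracy(fgsm(X, y, eps=1.0), y)  # degrades under attack
```

Comparing `clean_acc` against `adv_acc` for each model variant is the essence of the robustness evaluation described above: a variant whose accuracy drops more sharply under attack is the less robust one.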