Deep learning (DL) has significantly transformed cybersecurity, enabling advances in malware detection, botnet identification, intrusion detection, user authentication, and encrypted traffic analysis. However, the rise of adversarial examples (AE) poses a critical challenge to the robustness and reliability of DL-based systems. These subtle, deliberately crafted perturbations can deceive models, leading to severe consequences such as misclassification and exploitable system vulnerabilities. This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications, highlighting both their theoretical and practical implications. We systematically examine the methods used to generate adversarial examples, their specific effects across various domains, and the inherent trade-offs attackers face between efficacy and resource efficiency. Additionally, we explore recent advances in defense mechanisms, including gradient masking, adversarial training, and detection techniques, evaluating their potential to enhance model resilience. By summarizing cutting-edge research, this study aims to bridge the gap between adversarial research and practical security applications, offering insights that support the secure adoption of DL solutions in cybersecurity.
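To make the notion of a "subtle, crafted perturbation" concrete, the canonical Fast Gradient Sign Method (FGSM), one of the generation techniques commonly covered in this literature, can be sketched in a few lines. The logistic-regression model, its weights, the input, and the perturbation budget `eps` below are illustrative assumptions chosen for this sketch, not values or a model from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM for a binary logistic-regression classifier.

    The gradient of the binary cross-entropy loss with respect to the
    input x is (sigmoid(w.x + b) - y_true) * w; FGSM takes a single
    step of size eps in the sign direction of that gradient.
    """
    p = sigmoid(w @ x + b)          # model's predicted probability
    grad_x = (p - y_true) * w       # loss gradient w.r.t. the input
    return x + eps * np.sign(grad_x)

# Illustrative (assumed) model and sample.
w = np.array([2.0, -1.0])           # assumed model weights
b = 0.0
x = np.array([0.4, 0.1])            # benign sample, classified positive
y = 1.0                             # its true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)     # original prediction: True
print(sigmoid(w @ x_adv + b) > 0.5) # perturbed prediction flips: False
```

Although each coordinate of `x_adv` moves by at most `eps`, the classifier's decision flips, which is precisely the failure mode that motivates defenses such as adversarial training, where perturbed samples like `x_adv` are folded back into the training set.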