The dual-use nature of artificial intelligence is reshaping the cybersecurity landscape, introducing new threats across four main categories: deepfakes and synthetic media, adversarial AI attacks, automated malware, and AI-powered social engineering. This paper analyzes the emerging risks, attack mechanisms, and defensive shortcomings associated with AI in cybersecurity. We introduce a comparative taxonomy that links AI capabilities to threat modalities and defenses, review more than 70 academic and industry references, and identify high-impact research opportunities such as hybrid detection pipelines and benchmarking frameworks. The paper is organized thematically by threat type, with each section covering technical context, real-world incidents, legal frameworks, and countermeasures. Our findings underscore the urgency of explainable, interdisciplinary, and regulation-compliant AI defense systems for maintaining trust and security in digital ecosystems.