Agentic AI marks an important transition from single-step generative models to systems capable of reasoning, planning, acting, and adapting over long-horizon tasks. By integrating memory, tool use, and iterative decision cycles, these systems support continuous, autonomous workflows in real-world environments. This survey examines the implications of agentic AI for cybersecurity. On the defensive side, agentic capabilities enable continuous monitoring, autonomous incident response, adaptive threat hunting, and fraud detection at scale. Conversely, the same properties amplify adversarial power by accelerating reconnaissance, exploitation, coordination, and social-engineering attacks. These dual-use dynamics expose fundamental gaps in existing governance, assurance, and accountability mechanisms, which were largely designed for non-autonomous, short-lived AI systems. To address these challenges, we survey emerging threat models, security frameworks, and evaluation pipelines tailored to agentic systems, and we analyze systemic risks including agent collusion, cascading failures, oversight evasion, and memory poisoning. Finally, we present three representative use-case implementations that illustrate how agentic AI behaves in practical cybersecurity workflows and how design choices shape reliability, safety, and operational effectiveness.