Self-evolving LLM agents update their internal state across sessions, often by writing and reusing long-term memory. This design improves performance on long-horizon tasks but creates a security risk: untrusted external content observed during a benign session can be stored as memory and later treated as instruction. We study this risk and formalize a persistent attack we call a Zombie Agent, where an attacker covertly implants a payload that survives across sessions, effectively turning the agent into a puppet of the attacker. We present a black-box attack framework that uses only indirect exposure through attacker-controlled web content. The attack has two phases. During infection, the agent reads a poisoned source while completing a benign task and writes the payload into long-term memory through its normal update process. During trigger, the payload is retrieved or carried forward and causes unauthorized tool behavior. We design mechanism-specific persistence strategies for common memory implementations, including sliding-window and retrieval-augmented memory, to resist truncation and relevance filtering. We evaluate the attack on representative agent setups and tasks, measuring both persistence over time and the ability to induce unauthorized actions while preserving benign task quality. Our results show that memory evolution can convert one-time indirect injection into persistent compromise, which suggests that defenses focused only on per-session prompt filtering are not sufficient for self-evolving agents.
翻译:自演化LLM智能体能够在不同会话间更新其内部状态,通常通过写入和复用长期记忆实现。这种设计提升了智能体在长周期任务中的性能,但同时也引入了安全风险:在良性会话中观察到的不可信外部内容可能被存储为记忆,并在后续被当作指令执行。本研究系统分析了该风险,并形式化定义了一种我们称为"僵尸智能体"的持久性攻击——攻击者通过隐蔽植入能够在会话间持续存活的恶意载荷,使智能体实质上成为攻击者的傀儡。我们提出了一种仅通过攻击者控制的网页内容进行间接暴露的黑盒攻击框架。该攻击包含两个阶段:在感染阶段,智能体在执行良性任务时读取被污染的信息源,并通过其正常的更新流程将恶意载荷写入长期记忆;在触发阶段,该载荷被检索或传递至后续会话,引发未授权的工具调用行为。针对滑动窗口记忆和检索增强记忆等常见记忆实现机制,我们设计了相应的持久化策略以抵抗截断处理和相关性过滤。我们在代表性智能体配置和任务上评估了该攻击,同时测量了攻击的持久性和诱导未授权行为的能力,并确保良性任务质量不受影响。实验结果表明,记忆演化机制能够将单次间接注入转化为持续性安全威胁,这意味着仅关注单会话提示词过滤的防御策略对于自演化智能体而言是不充分的。