Tool-calling LLM agents can read private data, invoke external services, and trigger real-world actions, creating a security problem at the point of tool execution. We identify a denial-feedback leakage pattern, which we term causality laundering, in which an adversary probes a protected action, learns from the denial outcome, and exfiltrates the inferred information through a later seemingly benign tool call. This attack is not captured by flat provenance tracking alone because the leaked information arises from causal influence of the denied action, not direct data flow. We present the Agentic Reference Monitor (ARM), a runtime enforcement layer that mediates every tool invocation by consulting a provenance graph over tool calls, returned data, field-level provenance, and denied actions. ARM propagates trust through an integrity lattice and augments the graph with counterfactual edges from denied-action nodes, enabling enforcement over both transitive data dependencies and denial-induced causal influence. In a controlled evaluation on three representative attack scenarios, ARM blocks causality laundering, transitive taint propagation, and mixed-provenance field misuse that a flat provenance baseline misses, while adding sub-millisecond policy evaluation overhead. These results suggest that denial-aware causal provenance is a useful abstraction for securing tool-calling agent systems.
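The abstract's core mechanism, a provenance graph with counterfactual edges from denied-action nodes and integrity propagation over a lattice, can be illustrated with a minimal sketch. All names here (`Node`, `authorize`, the two-level lattice, the example tool names) are illustrative assumptions, not ARM's actual API or policy language:

```python
# Hypothetical sketch of denial-aware causal provenance, not ARM's real implementation.
from dataclasses import dataclass, field
from enum import IntEnum

class Integrity(IntEnum):
    """Assumed two-point integrity lattice; lower value = lower integrity."""
    UNTRUSTED = 0
    TRUSTED = 1

@dataclass
class Node:
    name: str
    kind: str                  # "tool_call", "data", or "denied_action"
    integrity: Integrity = Integrity.TRUSTED
    parents: list = field(default_factory=list)  # provenance edges, incl. counterfactual

def effective_integrity(node: Node) -> Integrity:
    """Meet over the lattice along all transitive provenance edges."""
    worst = node.integrity
    for p in node.parents:
        worst = min(worst, effective_integrity(p))
    return worst

def influenced_by_denial(node: Node) -> bool:
    """True if any ancestor (via data or counterfactual edges) is a denied action."""
    if node.kind == "denied_action":
        return True
    return any(influenced_by_denial(p) for p in node.parents)

def authorize(tool_call: Node, sink_is_external: bool) -> str:
    """Reference-monitor-style check consulted before every tool invocation."""
    if sink_is_external and influenced_by_denial(tool_call):
        return "BLOCK: denial-induced causal influence reaching an external sink"
    if effective_integrity(tool_call) == Integrity.UNTRUSTED:
        return "BLOCK: transitive low-integrity taint"
    return "ALLOW"

# Causality-laundering scenario: a denied action causally influences a later,
# seemingly benign call via a counterfactual edge through the agent's inference.
denied = Node("read_secret (denied)", "denied_action")
inferred = Node("agent_inference", "data", parents=[denied])  # counterfactual edge
exfil = Node("send_email", "tool_call", parents=[inferred])
print(authorize(exfil, sink_is_external=True))    # blocked

benign = Node("list_files", "tool_call")
print(authorize(benign, sink_is_external=False))  # allowed
```

A flat provenance tracker would see no sensitive data flowing into `send_email` and allow it; the counterfactual edge from the denied-action node is what makes the causal influence enforceable.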