Traditional software relies on contracts -- APIs, type systems, assertions -- to specify and enforce correct behavior. AI agents, by contrast, operate on prompts and natural-language instructions with no formal behavioral specification. This gap is a root cause of behavioral drift, governance failures, and the high failure rate of agentic AI deployments. We introduce Agent Behavioral Contracts (ABC), a formal framework that brings Design-by-Contract principles to autonomous AI agents. An ABC contract C = (P, I, G, R) specifies Preconditions, Invariants, Governance policies, and Recovery mechanisms as first-class, runtime-enforceable components. We define (p, delta, k)-satisfaction -- a probabilistic notion of contract compliance that accounts for LLM non-determinism and recovery -- and prove a Drift Bounds Theorem showing that contracts with recovery rate gamma > alpha (the natural drift rate) bound behavioral drift to D* = alpha/gamma in expectation, with Gaussian concentration in the stochastic setting. We establish sufficient conditions for safe contract composition in multi-agent chains and derive probabilistic degradation bounds. We implement ABC in AgentAssert, a runtime enforcement library, and evaluate it on AgentContract-Bench, a benchmark of 200 scenarios spanning 7 models from 6 vendors. Results across 1,980 sessions show that contracted agents detect 5.2-6.8 soft violations per session that uncontracted baselines miss entirely (p < 0.0001, Cohen's d = 6.7-33.8), achieve 88-100% hard-constraint compliance, and bound behavioral drift to D* < 0.27 across extended sessions, with 100% recovery for frontier models and 17-100% across all models, at < 10 ms overhead per action.
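The intuition behind the Drift Bounds Theorem can be checked in a minimal deterministic toy model. Assume drift accumulates additively at the natural rate alpha per action and contract enforcement recovers a gamma fraction of accumulated drift each step; this recurrence is an illustrative assumption, not the paper's exact stochastic dynamics, but its fixed point reproduces the stated bound D* = alpha/gamma:

```python
# Toy model of the Drift Bounds Theorem: iterate
#   D_{t+1} = D_t + alpha - gamma * D_t
# (additive natural drift, proportional recovery). With gamma > alpha
# this converges geometrically to the fixed point D* = alpha / gamma.
# The recurrence is a sketch for illustration only.

def simulate_drift(alpha: float, gamma: float, steps: int) -> float:
    """Iterate the drift recurrence and return the final drift level."""
    drift = 0.0
    for _ in range(steps):
        # one agent action: natural drift in, contract recovery out
        drift = drift + alpha - gamma * drift
    return drift

if __name__ == "__main__":
    alpha, gamma = 0.02, 0.10  # hypothetical rates with gamma > alpha
    final = simulate_drift(alpha, gamma, steps=500)
    print(round(final, 4), round(alpha / gamma, 4))  # both ~0.2
```

Solving the fixed-point equation D = D + alpha - gamma * D gives gamma * D = alpha, i.e. D* = alpha/gamma, matching the abstract's claim that recovery rate exceeding the natural drift rate bounds drift in expectation.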