Modern artificial intelligence (AI) systems act with a high degree of independence yet lack legal personhood, a paradox that fractures doctrines grounded in human-centric notions of mens rea and actus reus. This Article introduces Operational Agency (OA), a permeable legal fiction structured as an ex post evidentiary framework, and the Operational Agency Graph (OAG), a tool for mapping causal interactions among human actors, organizations, and AI systems. OA evaluates an AI system's observable operational characteristics: its goal-directedness (as a proxy for intent), predictive processing (as a proxy for foresight), and safety architecture (as a proxy for a standard of care). OAG operationalizes that analysis by embedding these characteristics in a causal graph that traces and apportions culpability among developers, fine-tuners, deployers, and users. Drawing on corporate criminal liability, the innocent-agent doctrine, and secondary and vicarious liability frameworks, the Article shows how OA and OAG strengthen existing doctrines. Across five real-world case studies spanning tort, civil rights, constitutional law, and antitrust, it demonstrates how the framework addresses challenges ranging from autonomous vehicle collisions to algorithmic price-fixing, offering courts a principled evidentiary method, and legislatures and industry a conceptual foundation, to ensure that human accountability keeps pace with technological autonomy without conferring personhood on AI.