Modern artificial intelligence (AI) systems act with a high degree of independence yet lack legal personhood, a paradox that fractures doctrines grounded in human-centric notions of mens rea and actus reus. This Article introduces Operational Agency (OA), a permeable legal fiction structured as an ex post evidentiary framework, and the Operational Agency Graph (OAG), a tool for mapping causal interactions among human actors, organizations, and AI systems. OA evaluates an AI's observable operational characteristics: its goal-directedness (as a proxy for intent), its predictive processing (as a proxy for foresight), and its safety architecture (as a proxy for a standard of care). OAG operationalizes that analysis by embedding these characteristics in a causal graph to trace and apportion culpability among developers, fine-tuners, deployers, and users. Drawing on corporate criminal liability, the innocent-agent doctrine, and secondary and vicarious liability frameworks, the Article shows how OA and OAG strengthen existing doctrines. Across five real-world case studies spanning tort, civil rights, constitutional law, and antitrust, it demonstrates how the framework addresses challenges ranging from autonomous vehicle collisions to algorithmic price-fixing, offering courts a principled evidentiary method, and legislatures and industry a conceptual foundation, to ensure that human accountability keeps pace with technological autonomy without conferring personhood on AI.