Thanks to advances in large language models, a new type of software agent, the artificial intelligence (AI) agent, has entered the marketplace. Companies such as OpenAI, Google, Microsoft, and Salesforce promise their AI Agents will go from generating passive text to executing tasks. Instead of a travel itinerary, an AI Agent would book all aspects of your trip. Instead of generating text or images for a social media post, an AI Agent would post the content across a host of social media outlets. The potential power of AI Agents has fueled legal scholars' fears that AI Agents will enable rogue commerce, human manipulation, rampant defamation, and intellectual property harms. These scholars are calling for regulation before AI Agents cause havoc. This Article addresses the concerns around AI Agents head-on. It shows that core aspects of how one piece of software interacts with another create ways to discipline AI Agents so that rogue, undesired actions are unlikely, perhaps more so than rules designed to govern human agents. It also develops a way to leverage the computer-science approach to value alignment to improve a user's ability to take action to prevent or correct AI Agent operations. That approach offers an added benefit of helping AI Agents align with norms around user-AI Agent interactions. These practices will enable desired economic outcomes and mitigate perceived risks. The Article also argues that no matter how much AI Agents seem like human agents, they need not, and should not, be given legal personhood status. In short, humans are responsible for AI Agents' actions, and this Article provides a guide for how humans can build and maintain responsible AI Agents.