Over the past decade, policymakers have developed a set of regulatory tools to ensure AI development aligns with key societal goals. Many of these tools were initially developed in response to concerns about task-specific AI, and they therefore encode certain assumptions about the nature of AI systems and the utility of certain regulatory approaches. With the advent of general-purpose AI (GPAI), however, some of these assumptions no longer hold, even as policymakers attempt to maintain a single regulatory target that covers both types of AI. In this paper, we identify four distinct aspects of GPAI that call for meaningfully different policy responses: the generality and adaptability of GPAI, which make it a poor regulatory target; the difficulty of designing effective evaluations; new legal concerns that change the ecosystem of stakeholders and sources of expertise; and the distributed structure of the GPAI value chain. In light of these distinctions, policymakers will need to evaluate where the past decade of policy work remains relevant and where new policies, designed to address the unique risks posed by GPAI, are necessary. We outline three recommendations for policymakers to more effectively identify regulatory targets and leverage constraints across the broader ecosystem to govern GPAI.