The EU AI Act adopts a horizontal and adaptive approach to governing AI technologies characterised by rapid development and unpredictable emerging capabilities. To maintain relevance, the Act embeds provisions for regulatory learning. However, these provisions operate within a complex network of actors and mechanisms that lacks a clearly defined technical basis for scalable information flow. This paper addresses this gap by establishing a theoretical model of the regulatory learning space defined by the AI Act, decomposed into micro, meso, and macro levels. Drawing on this functional perspective, we situate the diverse stakeholders, ranging from the EU Commission at the macro level to AI developers at the micro level, within the transitions of enforcement (macro to micro) and evidence aggregation (micro to macro). We identify AI Technical Sandboxes (AITSes) as the essential engine for evidence generation at the micro level, providing the data needed to drive scalable learning across all levels of the model. Through an extensive discussion of the requirements and challenges for AITSes to serve as this micro-level evidence generator, we aim to bridge the gap between legislative mandates and technical operationalisation, thereby enabling a structured discourse between technical and legal experts.