Regulators currently govern the AI data economy on intuition rather than evidence, struggling to choose among inconsistent regimes of informed consent, immunity, and liability. To fill this policy vacuum, this paper develops a novel computational policy laboratory: a spatially explicit Agent-Based Model (ABM) of the data market. To overcome the absence of empirical data for calibration, we introduce a two-stage methodological pipeline. First, we translate decision rules documented in multi-year fieldwork (2022-2025) into agent constraints, ensuring the model reflects actual bargaining frictions rather than theoretical abstractions. Second, we deploy Large Language Models (LLMs) as "subjects" in a Discrete Choice Experiment (DCE). This approach recovers precise preference primitives, such as willingness-to-pay elasticities, that cannot be observed in field data. Calibrated with these inputs, our model places rival legal institutions side by side and simulates their welfare effects. The results challenge the dominant regulatory paradigm. We find that property-rule mechanisms, such as informed consent, fail to maximize welfare. Counterintuitively, social welfare peaks when liability for substantive harm is shifted to the downstream buyer. This aligns with the "least cost avoider" principle: because downstream users control post-acquisition safeguards, they are best positioned to mitigate risk efficiently. By "de-romanticizing" seller-centric frameworks, this paper provides an economic justification for emerging doctrines of downstream reachability.
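To make the regime comparison concrete, the sketch below simulates a stylized data market under a property rule (informed consent) and a buyer-liability rule. It is a minimal, hypothetical illustration, not the paper's calibrated spatial model: every parameter value, the uniform willingness-to-accept draw standing in for the DCE-recovered primitives, and the single consent-friction term are invented for exposition, and its numeric output should not be read as the paper's results.

```python
import random

# Hypothetical illustration only: stylized parameters stand in for the
# fieldwork-derived frictions and the DCE-recovered preference primitives.
random.seed(0)

N_SELLERS = 1000
HARM_PROB = 0.2          # chance a traded record causes downstream harm (assumed)
HARM_COST = 4.0          # social cost of an unmitigated harm event (assumed)
SAFEGUARD_COST = 0.5     # buyer's cost of post-acquisition safeguards (assumed)
SAFEGUARD_EFFECT = 0.9   # share of expected harm the safeguard eliminates (assumed)
CONSENT_FRICTION = 0.8   # per-trade bargaining cost under informed consent (assumed)
BUYER_VALUE = 3.0        # buyer's gross value of a record (assumed)

def simulate(regime: str) -> float:
    """Average welfare per seller under a stylized legal regime."""
    welfare = 0.0
    for _ in range(N_SELLERS):
        # Seller's willingness-to-accept, standing in for a DCE-calibrated draw.
        wta = random.uniform(0.5, 2.5)
        if regime == "consent":
            # Property rule: trade requires costly consent, and neither party
            # internalizes downstream harm, so harm is deducted from welfare in full.
            surplus = BUYER_VALUE - wta - CONSENT_FRICTION
            if surplus > 0:
                welfare += surplus - HARM_PROB * HARM_COST
        elif regime == "buyer_liability":
            # Liability rule: the buyer bears the harm, so it installs the
            # safeguard whenever that is cheaper than the expected harm it
            # prevents (the least-cost-avoider logic).
            mitigate = SAFEGUARD_COST < HARM_PROB * HARM_COST * SAFEGUARD_EFFECT
            residual = HARM_PROB * HARM_COST * ((1 - SAFEGUARD_EFFECT) if mitigate else 1.0)
            surplus = BUYER_VALUE - wta - residual - (SAFEGUARD_COST if mitigate else 0.0)
            if surplus > 0:
                welfare += surplus  # harm is internalized, so surplus equals social welfare
    return welfare / N_SELLERS

for regime in ("consent", "buyer_liability"):
    print(f"{regime}: mean welfare per seller = {simulate(regime):.3f}")
```

In the paper's full model, agents are spatially situated, and the preference draws and frictions come from the fieldwork and the LLM-based DCE rather than from the uniform distribution and fixed constants used in this sketch.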