Data transparency has emerged as a rallying cry for addressing concerns about AI, with data quality, privacy, and copyright chief among them. Yet while these calls are crucial for accountability, current transparency policies often fall short of their intended aims. Much like nutrition facts for food, proposals for "nutrition facts" for AI currently suffer from limited engagement with research on what makes disclosures effective. We offer an institutional perspective and identify three common fallacies in policy implementations of data disclosures for AI. First, many data transparency proposals exhibit a specification gap between the stated goals of data transparency and the actual disclosures necessary to achieve those goals. Second, reform attempts exhibit an enforcement gap between the disclosures required on paper and the enforcement needed to ensure compliance in fact. Third, policy proposals manifest an impact gap between disclosed information and meaningful changes in developer practices and public understanding. Informed by social science research on transparency, our analysis identifies affirmative paths toward transparency that is effective rather than merely symbolic.