There is strong agreement that generative AI should be regulated, but strong disagreement on how to approach regulation. While some argue that AI regulation should rely mostly on extensions of existing laws, others argue that entirely new laws and regulations are needed to ensure that generative AI benefits society. In this paper, I argue that debates on generative AI regulation can be informed by the debates on, and evidence from, social media regulation. For example, AI companies have faced allegations of political bias regarding the images and text their models produce, similar to the allegations social media companies have faced regarding content ranking on their platforms. First, I compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, I discuss specific policy recommendations based on the evolution of social media and their regulation. These recommendations include investments in: efforts to counter bias and perceptions thereof (e.g., via transparency, researcher access, oversight boards, democratic input, and research studies); specific areas of regulatory concern (e.g., youth wellbeing, election integrity) and trust and safety; computational social science research; and a more global perspective. Applying the lessons learnt from social media regulation to generative AI regulation can save time and effort, and prevent avoidable mistakes.