Online platforms are seeing increasing amounts of AI-generated content -- text and other media created or co-created with generative AI. This trend suggests that platforms may need to establish governance frameworks -- policies and enforcement strategies for how users create, post, share, and engage with such content -- to encourage responsible use. We investigate the governance of AI-generated content across 40 popular social media platforms. Just over two-thirds explicitly describe governance of AI-generated content, spanning six themes. Most platforms focus on moderating AI-generated content that violates established content rules and on requiring disclosure of AI-generated content. Fewer platforms -- chiefly those focused on creativity and knowledge-sharing -- address other issues such as ownership and monetization. Based on these findings, we suggest that stakeholders and policymakers develop more direct, comprehensive, and forward-looking governance of AI-generated content, along with tools and education for users about the use of such content.