With the increasing adoption of artificial intelligence (AI) technologies in the news industry, media organizations have begun publishing guidelines that aim to promote the responsible, ethical, and unbiased implementation of AI-based technologies. These guidelines are expected to serve journalists and media workers by establishing best practices and a framework that helps them navigate ever-evolving AI tools. Drawing on institutional theory and digital inequality concepts, this study analyzes 37 AI guidelines for media purposes in 17 countries. Our analysis reveals key thematic areas, such as transparency, accountability, fairness, privacy, and the preservation of journalistic values. Results highlight shared principles and best practices that emerge from these guidelines, including the importance of human oversight, explainability of AI systems, disclosure of automated content, and protection of user data. However, the geographical distribution of these guidelines, dominated by Western nations, particularly in North America and Europe, may fuel ongoing concerns about power asymmetries in AI adoption and, consequently, isomorphism outside these regions. Our results may serve as a resource for news organizations, policymakers, and stakeholders seeking to navigate the complexities of AI development and to create a more inclusive and equitable digital future for the media industry worldwide.