The rapid integration of generative AI into academic writing has prompted widespread policy responses from journals and publishers. However, the effectiveness of these policies remains unclear. Here, we analyze 5,114 journals and over 5.2 million papers to evaluate the real-world impact of AI usage guidelines. We show that despite 70% of journals adopting AI policies (primarily requiring disclosure), researchers' use of AI writing tools has increased dramatically across disciplines, with no significant difference between journals with and without policies. Non-English-speaking countries, the physical sciences, and high-OA journals exhibit the highest growth rates. Crucially, a full-text analysis of 164k scientific publications reveals a striking transparency gap: of the 75k papers published since 2023, only 76 (~0.1%) explicitly disclosed AI use. Our findings suggest that current policies have largely failed to promote transparency or restrain AI adoption. We urge a re-evaluation of ethical frameworks to foster responsible AI integration in science.