The rapid advancement of generative AI has introduced a new class of tools capable of producing publication-quality scientific figures, graphical abstracts, and data visualizations. Academic publishers, however, have responded with inconsistent and often ambiguous policies on AI-generated imagery. This paper surveys the current stances of major journals and publishers -- including Nature, Science, Cell Press, Elsevier, and PLOS -- on the use of AI-generated figures. We identify the key concerns publishers raise, including reproducibility, authorship attribution, and the potential for visual misinformation. Drawing on practical examples from tools such as SciDraw, an AI-powered platform designed specifically for scientific illustration, we propose a set of best-practice guidelines for researchers seeking to use AI figure-generation tools in a compliant and transparent manner. Our findings suggest that, with appropriate disclosure and quality control, AI-generated figures can meaningfully accelerate scientific communication without compromising integrity.