The rapid adoption of generative artificial intelligence (AI) in scientific research, particularly large language models (LLMs), has outpaced the development of ethical guidelines, leading to a "Triple-Too" problem: too many high-level ethical initiatives, principles too abstract to offer contextual and practical relevance, and too much focus on restrictions and risks over benefits and utilities. Existing approaches, including principlism (reliance on abstract ethical principles), formalism (rigid application of rules), and technical solutionism (overemphasis on technological fixes), offer little practical guidance for addressing the ethical challenges of AI in scientific research practice. To bridge the gap between abstract principles and day-to-day research practice, a user-centered, realism-inspired approach is proposed here. It outlines five specific goals for ethical AI use: 1) understanding model training and output, including bias mitigation strategies; 2) respecting privacy, confidentiality, and copyright; 3) avoiding plagiarism and policy violations; 4) applying AI beneficially compared to alternatives; and 5) using AI transparently and reproducibly. Each goal is accompanied by actionable strategies and realistic cases of misuse and corrective measures. I argue that ethical AI application requires evaluating its utility against existing alternatives rather than relying on isolated performance metrics. Additionally, I propose documentation guidelines to enhance transparency and reproducibility in AI-assisted research. Moving forward, we need targeted professional development, training programs, and balanced enforcement mechanisms to promote responsible AI use while fostering innovation. By refining these ethical guidelines and adapting them to emerging AI capabilities, we can accelerate scientific progress without compromising research integrity.