The ongoing shortage of skilled developers, particularly in security-critical software development, has led organizations to increasingly adopt AI-powered development tools to boost productivity and reduce reliance on limited human expertise. These tools, often based on large language models, aim to automate routine tasks and make secure software development more accessible and efficient. However, it remains unclear how developers' general programming experience, their security-specific experience, and the type of AI tool used (free vs. paid) affect the security of the resulting software. Therefore, we conducted a quantitative programming study with software developers (n=159) exploring the impact of Google's AI tool Gemini on code security. Participants were assigned a security-related programming task using either no AI tools, the free version of Gemini, or the paid version of Gemini. While we did not observe significant differences in code security between the Gemini conditions, programming experience significantly improved code security and could not be fully substituted by Gemini.