In recent years, the rise of AI-assisted code-generation tools has significantly transformed software development. While code generators have mainly supported conventional software development, their use is now extending to powerful and secure AI systems. Code-generating systems such as ChatGPT, OpenAI Codex, GitHub Copilot, and AlphaCode build on advances in machine learning (ML) and natural language processing (NLP) enabled by large language models (LLMs). These models, however, operate probabilistically: although they can generate complex code from natural-language input, the functionality and security of that code are not guaranteed. To fully exploit the considerable potential of this technology, the security, reliability, functionality, and quality of the generated code must be ensured. This paper examines how far these goals have been achieved to date and explores strategies for optimizing them. In addition, we investigate how such systems can be optimized to create safe, high-performance, and executable artificial intelligence (AI) models, and consider how to improve their accessibility to make AI development more inclusive and equitable.