Large Language Models (LLMs) have shown surprising proficiency in generating code snippets, promising to automate large parts of software engineering via artificial intelligence (AI). We argue that successfully deploying AI software engineers requires a level of trust equal to or even greater than the trust established by human-driven software engineering practices. The recent trend toward LLM agents offers a path toward combining the capacity of LLMs to create new code with the capacity of analysis tools to increase trust in that code. This opinion piece comments on whether LLM agents could dominate software engineering workflows in the future and whether the focus of programming will shift from programming at scale to programming with trust.