Recent innovations in artificial intelligence (AI), primarily powered by large language models (LLMs), have transformed how programmers develop and maintain software -- leading to new frontiers in software engineering (SE). The advanced capabilities of LLM-based programming assistants to support software development tasks have led to a rise in the adoption of LLMs in SE. However, little is known about the extent to which AI programming assistants support and adopt evidence-based practices, tools, and processes -- those verified by research findings. To this end, our work conducts a preliminary evaluation exploring the beliefs and behaviors of LLMs used to support software development tasks. We investigate 17 evidence-based claims posited by empirical SE research across five LLM-based programming assistants. Our findings show that LLM-based programming assistants hold ambiguous beliefs regarding research claims, lack credible evidence to support their responses, and are incapable of adopting practices demonstrated by empirical SE research to support development tasks. Based on our results, we provide implications for practitioners adopting LLM-based programming assistants in development contexts and shed light on future research directions to enhance the reliability and trustworthiness of LLMs -- aiming to increase awareness and adoption of evidence-based SE research findings in practice.