Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We find real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. To this end, we introduce SWE-bench, an evaluation framework consisting of $2,294$ software engineering problems drawn from real GitHub issues and corresponding pull requests across $12$ popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts, and perform complex reasoning that goes far beyond traditional code generation tasks. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. The best-performing model, Claude 2, solves a mere $1.96\%$ of the issues. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.