Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs on code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks that focus solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models on CodeEditorBench, and highlights differences in model performance across problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets so that the community can expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.