Large Language Models (LLMs) have demonstrated exceptional coding capability. However, the debugging capability of LLMs, another critical component of programming proficiency, remains relatively unexplored. Previous evaluations of LLMs' debugging ability are significantly limited by the risk of data leakage, the scale of the dataset, and the variety of tested bugs. To overcome these deficiencies, we introduce `DebugBench', an LLM debugging benchmark consisting of 4,253 instances. It covers four major bug categories and 18 minor types in C++, Java, and Python. To construct DebugBench, we collect code snippets from the LeetCode community, implant bugs into the source data with GPT-4, and conduct rigorous quality checks. We evaluate two commercial and four open-source models in a zero-shot scenario. We find that (1) while closed-source models exhibit inferior debugging performance compared to humans, open-source models achieve even lower pass rates; (2) the complexity of debugging fluctuates notably depending on the bug category; (3) incorporating runtime feedback has a clear impact on debugging performance, but is not always helpful. As an extension, we also compare LLM debugging and code generation, revealing a strong correlation between them for closed-source models. These findings will benefit the development of LLMs in debugging.
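The pass-rate metric mentioned above scores a model's repair by the fraction of test cases the repaired program answers correctly. A minimal sketch of such a judge (with hypothetical names, not the actual DebugBench harness) might look like:

```python
# Minimal sketch (illustrative, not the DebugBench evaluation code):
# score a candidate function by its pass rate over hidden test cases.
from typing import Callable, List, Tuple


def pass_rate(candidate: Callable, tests: List[Tuple[tuple, object]]) -> float:
    """Fraction of (args, expected) test cases the candidate answers correctly."""
    passed = 0
    for args, expected in tests:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors count as failed test cases
    return passed / len(tests)


# Toy example: a buggy implementation vs. its repaired version.
def buggy_add(a, b):
    return a - b  # implanted logic bug: wrong operator


def fixed_add(a, b):
    return a + b


tests = [((1, 2), 3), ((0, 0), 0), ((-1, 5), 4)]
print(pass_rate(buggy_add, tests))  # buggy version passes only (0, 0)
print(pass_rate(fixed_add, tests))  # repaired version passes all cases
```

In practice the candidate would be model-generated repaired code executed in a sandbox, but the scoring logic reduces to this kind of per-test comparison.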