Large language models (LLMs) and LLM-based agents have been applied to fix bugs automatically, demonstrating their capability to address software defects through interaction with the development environment, iterative validation, and code modification. However, systematic analysis of these agent systems remains limited, particularly regarding performance variations among the top-performing ones. In this paper, we examine six repair systems on the SWE-bench Verified benchmark for automated bug fixing. We first assess each system's overall performance, noting the instances solvable by all systems and those solvable by none, and explore the capabilities of the different systems. We then compare fault localization accuracy at the file and code-symbol levels and evaluate bug reproduction capabilities. From this analysis, we conclude that further optimization is needed both in the capabilities of the underlying LLMs and in the design of the agentic flow to improve the effectiveness of agents in bug fixing.