We introduce TDFlow, a novel test-driven agentic workflow that frames repository-scale software engineering as a test-resolution task, specifically designed to solve human-written tests. Given a set of tests, TDFlow repeatedly proposes, revises, and debugs repository-scale patches using precisely engineered sub-agents and tightly constrained tools. The workflow decomposes software engineering program repair into four components, each governed by a dedicated sub-agent. This simple, forced decoupling of patch proposing, debugging, patch revision, and optional test generation (1) reduces the long-context burden on any individual sub-agent, (2) focuses each sub-agent on a specific, pre-defined sub-task, and (3) allows for specialized performance improvement on individual sub-tasks. When provided human-written tests, TDFlow attains an 88.8% pass rate on SWE-Bench Lite (an absolute improvement of 27.8% over the next best system) and 94.3% on SWE-Bench Verified. Manual inspection of the 800 TDFlow runs on SWE-Bench Lite and Verified uncovers only 7 instances of test hacking, which we subsequently count as failures. Furthermore, we show that the primary obstacle to human-level software engineering performance lies in writing successful reproduction tests. We envision a human-LLM interactive system powered by TDFlow in which human developers write tests that LLM systems then solve. Together, these results indicate that modern LLMs, when embedded in a narrowly engineered, test-driven workflow, already achieve human-level test resolution -- with the final frontier for fully autonomous repository repair being the accurate generation of valid reproduction tests.