Software development agents powered by large language models (LLMs) have shown great promise in automating tasks such as environment setup, issue solving, and program repair. Unfortunately, understanding and debugging such agents remains challenging due to their complex and dynamic nature. Developers must reason about trajectories of LLM queries, tool calls, and code modifications, but current techniques reveal little of this intermediate process in a comprehensible format. The key insight of this paper is that debugging software development agents shares many similarities with conventional debugging of software programs, yet requires a higher level of abstraction, one that shifts the focus from low-level implementation details to high-level agent actions. Drawing on this insight, we introduce AgentStepper, the first interactive debugger for LLM-based software engineering agents. AgentStepper enables developers to inspect, control, and interactively manipulate agent trajectories. It represents trajectories as structured conversations among an LLM, the agent program, and tools, and it supports breakpoints, stepwise execution, and live editing of prompts and tool invocations, while capturing and displaying intermediate repository-level code changes. Our evaluation applies AgentStepper to three state-of-the-art software development agents, ExecutionAgent, SWE-Agent, and RepairAgent, showing that integrating the approach into existing agents requires only minor code changes (39-42 edited lines). Moreover, we report on a user study with twelve participants, indicating that AgentStepper improves the ability of participants to interpret trajectories (64% vs. 67% mean performance) and to identify bugs in the agent's implementation (17% vs. 60% success rate), while reducing perceived workload (e.g., frustration reduced from 5.4/7.0 to 2.4/7.0) compared to conventional tools.