A widespread practice in software development is to tailor coding agents to repositories using context files, such as AGENTS.md, by either manually or automatically generating them. Although this practice is strongly encouraged by agent developers, there is currently no rigorous investigation into whether such context files are actually effective for real-world tasks. In this work, we study this question and evaluate coding agents' task completion performance in two complementary settings: established SWE-bench tasks from popular repositories, with LLM-generated context files following agent-developer recommendations, and a novel collection of issues from repositories containing developer-committed context files. Across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%. Behaviorally, both LLM-generated and developer-provided context files encourage broader exploration (e.g., more thorough testing and file traversal), and coding agents tend to respect their instructions. Ultimately, we conclude that unnecessary requirements from context files make tasks harder, and human-written context files should describe only minimal requirements.