The instruction-following ability of Large Language Models (LLMs) has cultivated a class of LLM-based systems capable of approaching complex tasks such as making edits to large code repositories. Due to the high sensitivity and unpredictability of LLM behavior in response to changes in prompting, robust evaluation tools are needed to drive future iteration of these systems. We propose RES-Q, a natural language instruction-based benchmark for evaluating $\textbf{R}$epository $\textbf{E}$diting $\textbf{S}$ystems, which consists of 100 repository editing tasks derived from real GitHub commits. Given an edit instruction and a code repository, RES-Q evaluates an LLM system's ability to gather information and construct an edit that satisfies the criteria set by the instruction. We argue that evaluating LLMs in this way addresses issues with traditional benchmarks and provides a more holistic assessment of a model's abilities. We evaluate various state-of-the-art LLMs as language agents in a repository-editing system built on Qurrent OS, our language agent development software. Despite their 1% pass@1 performance difference on HumanEval, we find Claude Sonnet 3.5 outperforms GPT-4o by 12% pass@1 on RES-Q, indicating RES-Q's capacity to differentiate model capability as traditional benchmarks approach saturation. We further investigate token efficiency, performance relationships with existing benchmarks, and interesting disparities between closed and open-source LLMs. Code and dataset are available at https://github.com/Qurrent-AI/RES-Q.