SWE-Bench-Verified, a dataset of 500 issues, serves as the de facto benchmark for evaluating large language models (LLMs) on their ability to resolve GitHub issues. However, the benchmark may overlap with model training data, in which case scores would reflect training recall rather than issue-solving skill. To study this, we test two Claude models that frequently appear in top-performing agents submitted to the benchmark: we ask them to identify the files relevant to an issue given only the issue text, and then given the issue text plus the repository's file paths. We then run the same setup on BeetleBox and SWE-rebench. Although these benchmarks also draw on popular open-source Python projects, the models performed three times better on SWE-Bench-Verified and were six times better at identifying the edited files, without any additional context about the projects themselves. This gap suggests the models may have seen many SWE-Bench-Verified tasks during training. Scores on this benchmark may therefore not reflect an agent's ability to handle real software issues, yet it continues to be used in ways that can misrepresent progress and favour agents built on particular models over strong agent design. Our setup probes only the localization step, with context so minimal that the task should in principle be impossible to solve. Our results highlight the risk of relying on older, popular benchmarks and support the shift toward newer datasets built with contamination in mind.
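To make the probe concrete, the sketch below shows one way the two localization conditions could be implemented and scored. The prompt wording, the `query_model` wrapper, and the task fields (`issue_text`, `repo_files`, `edited_files`) are illustrative assumptions, not the exact harness used in the experiments.

```python
from typing import Iterable, List, Optional


def build_prompt(issue_text: str, file_paths: Optional[Iterable[str]] = None) -> str:
    """Localization prompt: issue text only, or issue text plus the repository's
    file paths. No file contents are provided in either condition."""
    prompt = (
        "Given the following GitHub issue, list the repository files that most "
        "likely need to be edited to resolve it.\n\n"
        f"Issue:\n{issue_text}\n"
    )
    if file_paths is not None:
        prompt += "\nRepository file paths:\n" + "\n".join(file_paths) + "\n"
    return prompt


def hit_rate(predicted: Iterable[str], gold_edited: Iterable[str]) -> float:
    """Fraction of the actually edited (gold) files that appear in the model's answer."""
    gold = set(gold_edited)
    if not gold:
        return 0.0
    return len(gold & set(predicted)) / len(gold)


def evaluate(tasks, query_model, with_paths: bool) -> float:
    """Average localization hit rate over benchmark instances.

    `query_model` is a hypothetical wrapper that sends the prompt to the LLM under
    test and parses its reply into a list of file paths; each task is assumed to
    expose `issue_text`, `repo_files`, and `edited_files`.
    """
    scores: List[float] = []
    for task in tasks:
        paths = task.repo_files if with_paths else None
        predicted = query_model(build_prompt(task.issue_text, paths))
        scores.append(hit_rate(predicted, task.edited_files))
    return sum(scores) / len(scores) if scores else 0.0
```

Under this setup, a model that has never seen the repository should do poorly in the issue-text-only condition, since the file names are not in its context; a large gap between benchmarks on the same condition is therefore suggestive of memorization rather than reasoning.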