Software testing is crucial for ensuring the correctness and reliability of software systems. Automatically generating issue reproduction tests from natural language issue descriptions enhances developer productivity by simplifying root cause analysis, promotes test-driven development ("test first, write code later"), and can improve the effectiveness of automated issue-resolution systems such as coding agents. Existing methods for this task rely predominantly on closed-source LLMs, with limited exploration of open models. To address this, we propose SWE-Tester -- a novel pipeline for training open-source LLMs to generate issue reproduction tests. First, we curate a high-quality training dataset of 41K instances from 2.6K open-source GitHub repositories and use it to train LLMs of varying sizes and families. The fine-tuned models achieve absolute improvements of up to 10\% in success rate and 21\% in change coverage on SWT-Bench Verified. Further analysis shows consistent improvements with increased inference-time compute, more data, and larger models. These results highlight the effectiveness of our framework for advancing open-source LLMs in this domain.