Automatic unit test (UT) generation is essential for software quality assurance, but existing approaches (symbolic execution, search-based techniques, and recent LLM-based generators) struggle to produce human-quality tests with correct, meaningful assertions and reliable chain-of-thought (CoT) explanations. We identify a gap in UT training data: repository-mined tests lack developer CoTs, while LLM-distilled CoTs are often incorrect or incomplete. To close this gap, we propose a novel data-distillation approach that uses self-debugging to produce high-quality UT training examples paired with faithful CoTs. Our approach combines (1) guided test repair, a heuristic loop with error-, failure-, and coverage-focused steps that prompts the generating model to diagnose and iteratively fix its tests, and (2) CoT compression, which condenses the original and debugging CoTs into concise explanations that directly justify the corrected tests. We apply this pipeline to a large corpus of open-source projects to construct a dataset of 74,518 high-quality <focal method, test, CoT> examples, which we then use for supervised fine-tuning of a base model. An empirical evaluation shows that the fine-tuned model achieves strong UT generation effectiveness: a 36.17% pass rate on test assertions, 43.90% branch coverage, and an 88.66% mutation score, substantially higher than state-of-the-art commercial models such as o4-mini.
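The guided test repair loop can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: `run_tests` and `ask_model_to_fix` are hypothetical stubs standing in for the real test harness and LLM calls, and the loop simply dispatches on whether each round produced an error, an assertion failure, or low coverage, while recording a trace that could later feed CoT compression.

```python
# Sketch of a guided test-repair loop: classify each round's outcome
# (error / failing assertion / low coverage), ask the model for a fix,
# and keep the debugging trace. Harness and model are stubbed.

def run_tests(test_code):
    """Stub harness: returns (status, feedback), where status is one of
    'error', 'fail', 'low_coverage', or 'ok'."""
    if "import" not in test_code:
        return "error", "NameError: missing import"
    if "assert" not in test_code:
        return "fail", "no assertions executed"
    if "branch" not in test_code:
        return "low_coverage", "uncovered branch: x < 0"
    return "ok", ""

def ask_model_to_fix(test_code, status, feedback):
    """Stub LLM call: applies the minimal edit the feedback asks for."""
    fixes = {
        "error": "import unittest\n" + test_code,
        "fail": test_code + "\nassert add(1, 2) == 3",
        "low_coverage": test_code + "\nassert add(-1, 1) == 0  # branch",
    }
    return fixes[status]

def guided_repair(test_code, max_rounds=5):
    """Iteratively repair a generated test until it passes or the round
    budget is exhausted. Returns (final_test, debug_trace); the trace of
    (status, feedback) pairs is raw material for CoT compression."""
    trace = []
    for _ in range(max_rounds):
        status, feedback = run_tests(test_code)
        if status == "ok":
            break
        trace.append((status, feedback))
        test_code = ask_model_to_fix(test_code, status, feedback)
    return test_code, trace

final, trace = guided_repair("assert add(0, 0) == 0")
```

With these stubs, the initial test first triggers an error-focused repair (missing import) and then a coverage-focused one (uncovered branch) before passing; the recorded trace is exactly the debugging CoT that the compression step would condense.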