Automated test-generation research overwhelmingly assumes the correctness of focal methods, yet practitioners routinely face non-regression scenarios in which the focal method may be defective. A baseline evaluation of EvoSuite and two leading Large Language Model (LLM)-based generators, ChatTester and ChatUniTest, on defective focal methods reveals that despite achieving up to 83% branch coverage, none of the generated tests expose the defects. To address this problem, we first construct two new benchmarks, Defects4J-Desc and QuixBugs-Desc, in which each focal method is paired with a Natural Language Description (NLD) of its intended functionality. We then propose DISTINCT, a description-guided, branch-consistency analysis framework that turns LLMs into fault-aware test generators. DISTINCT comprises three iterative components: (1) a Generator that derives initial tests from the NLD and the focal method, (2) a Validator that iteratively repairs uncompilable tests using compiler diagnostics, and (3) an Analyzer that iteratively aligns test behavior with NLD semantics via branch-level analysis. Extensive experiments confirm the effectiveness of our approach. Compared to state-of-the-art methods, DISTINCT achieves an average improvement of 14.64% in Compilation Success Rate (CSR) and 6.66% in Passing Rate (PR) across both benchmarks. It notably enhances Defect Detection Rate (DDR) on both benchmarks, with a particularly significant gain of 149.26% on Defects4J-Desc. In terms of code coverage, DISTINCT improves Statement Coverage (SC) by an average of 3.77% and Branch Coverage (BC) by 5.36%. These results set a new baseline for non-regressive test generation and highlight how description-driven reasoning enables LLMs to move beyond coverage chasing toward effective defect detection.
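The three-component loop described above can be sketched in Python. This is a minimal illustrative skeleton, not the paper's implementation: all names (`generate`, `validate`, `analyze`, `MAX_ITERS`) and the stubbed repair/alignment logic are assumptions standing in for LLM calls, a real compiler, and the branch-level analysis.

```python
from dataclasses import dataclass

MAX_ITERS = 3  # hypothetical per-component iteration budget


@dataclass
class TestCase:
    code: str
    compiles: bool = False          # set by the (stubbed) compiler check
    branch_consistent: bool = False  # set by the (stubbed) branch analysis


def generate(nld: str, focal_method: str) -> TestCase:
    # Generator: derive an initial test from the NLD and the focal method
    # (in DISTINCT this is an LLM call; stubbed here).
    return TestCase(code=f"// test derived from NLD: {nld}")


def validate(test: TestCase) -> TestCase:
    # Validator: iteratively repair uncompilable tests using
    # compiler diagnostics, up to a fixed iteration budget.
    for _ in range(MAX_ITERS):
        if test.compiles:
            break
        test.code += "\n// patch guided by compiler diagnostic"
        test.compiles = True  # stub: assume the repair succeeds
    return test


def analyze(test: TestCase, nld: str) -> TestCase:
    # Analyzer: iteratively align the branches the test exercises
    # with the semantics stated in the NLD.
    for _ in range(MAX_ITERS):
        if test.branch_consistent:
            break
        test.code += "\n// assertion adjusted via branch-level analysis"
        test.branch_consistent = True  # stub: assume alignment succeeds
    return test


def distinct(nld: str, focal_method: str) -> TestCase:
    # Full pipeline: Generator -> Validator -> Analyzer.
    test = generate(nld, focal_method)
    test = validate(test)
    return analyze(test, nld)
```

The key design point mirrored here is that validation and analysis are separate iterative phases: a test must first compile before its branch behavior can be checked against the description.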