Diffusion large language models (dLLMs) enable parallel generation and are promising for unit test generation (UTG), where efficient, large-scale automated testing is essential to software development. Despite this advantage, their application to UTG is still constrained by a clear trade-off between efficiency and test quality: increasing the number of tokens generated per step often causes a sharp decline in test-case quality. To overcome this limitation, we present DiffuTester, an acceleration framework specifically tailored to dLLMs for UTG. DiffuTester is motivated by the observation that unit tests targeting the same focal method often share structural patterns. It employs a novel structural-pattern-based decoding approach that dynamically identifies structural patterns across unit tests through their abstract syntax trees and decodes the corresponding tokens in addition to the standard decoding step, thereby achieving acceleration without compromising output quality. To enable comprehensive evaluation, we extend the original TestEval benchmark to three programming languages. Extensive experiments on three benchmarks with two representative models show that DiffuTester delivers significant acceleration while preserving test coverage. Moreover, DiffuTester generalizes well across different dLLMs and programming languages, providing a practical and scalable solution for efficient UTG in software development. Code and data are publicly available at https://github.com/TsinghuaISE/DiffuTester.
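To make the key observation concrete, the sketch below illustrates (in Python, using the standard `ast` module) how two unit tests for the same focal method can share an AST-level structural skeleton. The test functions, the hypothetical focal method `add`, and the prefix-matching heuristic are all illustrative assumptions for exposition, not DiffuTester's actual pattern-detection algorithm.

```python
# Illustrative sketch, not DiffuTester's implementation: two unit tests
# for the same (hypothetical) focal method `add` differ only in their
# literal values, so flattening their ASTs into node-type sequences
# exposes an identical structural pattern.
import ast

TEST_A = """
def test_add_small():
    result = add(2, 3)
    assert result == 5
"""

TEST_B = """
def test_add_large():
    result = add(10, 20)
    assert result == 30
"""

def node_type_sequence(source: str) -> list[str]:
    """Flatten a test's AST into a breadth-first sequence of node type names."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def shared_pattern(seq_a: list[str], seq_b: list[str]) -> list[str]:
    """Longest common prefix of node types: a crude stand-in for a
    structural pattern shared by the two tests."""
    common = []
    for a, b in zip(seq_a, seq_b):
        if a != b:
            break
        common.append(a)
    return common

seq_a = node_type_sequence(TEST_A)
seq_b = node_type_sequence(TEST_B)
pattern = shared_pattern(seq_a, seq_b)
print(pattern)  # node types shared by both tests, e.g. FunctionDef, Assign, Assert, Call, ...
```

Because the two tests differ only in constant values, the shared pattern here covers their entire structure; in a decoding loop, tokens corresponding to such shared structure are exactly the ones a dLLM could commit in parallel without risking quality.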