Unit testing is an essential activity in software development for verifying the correctness of software components. However, manually writing unit tests is challenging and time-consuming. The emergence of Large Language Models (LLMs) offers a new direction for automating unit test generation. Existing research primarily focuses on closed-source LLMs (e.g., ChatGPT and Codex) with fixed prompting strategies, leaving the capabilities of advanced open-source LLMs under various prompting settings unexplored. In particular, open-source LLMs offer advantages in data privacy protection and have demonstrated superior performance on some tasks. Moreover, effective prompting is crucial for maximizing an LLM's capabilities. In this paper, we conduct the first empirical study to fill this gap, based on 17 Java projects, five widely used open-source LLMs with different architectures and parameter sizes, and comprehensive evaluation metrics. Our findings highlight the significant influence of various prompt factors, compare the performance of open-source LLMs against the commercial GPT-4 and the traditional EvoSuite, and identify limitations in LLM-based unit test generation. We then derive a series of implications from our study to guide future research and the practical use of LLM-based unit test generation.