Large Language Models (LLMs) have recently been used to generate mutants in both research and industrial practice. However, there has been no comprehensive empirical study of their performance for this increasingly important LLM-based Software Engineering application. To address this, we conduct a comprehensive empirical study evaluating BugFarm and LLMorpheus (the two state-of-the-art LLM-based approaches), alongside seven LLMs using our newly designed prompt, including both leading open- and closed-source models, on 851 real bugs from two real-world Java bug benchmarks. Our results reveal that, compared to existing rule-based approaches, LLMs generate more diverse mutants that are behaviorally closer to real bugs and, most importantly, achieve 111.29% higher fault detection: 87.98% (for LLMs) vs. 41.64% (for rule-based), an increase of 46.34 percentage points. Nevertheless, our results also reveal that this improved effectiveness comes at a cost: LLM-generated mutants have worse non-compilability, duplication, and equivalent-mutant rates by 26.60, 10.14, and 3.51 percentage points, respectively. These findings are immediately actionable for both research and practice. They give practitioners greater confidence in deploying LLM-based mutation, while researchers now have a state-of-the-art baseline from which to develop techniques that further improve effectiveness and reduce cost.