Mutation testing is vital for ensuring software quality. However, the presence of equivalent mutants is known to introduce redundant cost and bias issues, hindering the effectiveness of mutation testing in practical use. Although numerous equivalent mutant detection (EMD) techniques have been proposed, they exhibit limitations due to the scarcity of training data and challenges in generalizing to unseen mutants. Recently, large language models (LLMs) have been extensively adopted in various code-related tasks and have shown superior performance by more accurately capturing program semantics. Yet the performance of LLMs in equivalent mutant detection remains largely unclear. In this paper, we conduct an empirical study on 3,302 method-level Java mutant pairs to comprehensively investigate the effectiveness and efficiency of LLMs for equivalent mutant detection. Specifically, we assess the performance of LLMs compared to existing EMD techniques, examine various strategies for applying LLMs, evaluate the orthogonality between EMD techniques, and measure the time overhead of training and inference. Our findings demonstrate that LLM-based techniques significantly outperform existing techniques (i.e., an average improvement of 35.69% in F1-score), with the fine-tuned code embedding strategy being the most effective. Moreover, LLM-based techniques offer an excellent balance between cost (relatively low training and inference time) and effectiveness. Based on our findings, we further discuss the impact of model size and embedding quality, and provide several promising directions for future research. This work is the first to examine LLMs for equivalent mutant detection, affirming their effectiveness and efficiency.