Test smells are coding issues that typically arise from inadequate testing practices, a lack of knowledge about effective testing, or deadline pressure to complete projects. Their presence can negatively impact the maintainability and reliability of software. While tools exist that use advanced static analysis or machine learning techniques to detect test smells, they often require considerable effort to set up and use. This study evaluates the capability of Large Language Models (LLMs) to detect test smells automatically. We evaluated ChatGPT-4, Mistral Large, and Gemini Advanced on 30 types of test smells across codebases in seven different programming languages collected from the literature. ChatGPT-4 identified 21 types of test smells, Gemini Advanced identified 17, and Mistral Large detected 15. Conclusion: the LLMs demonstrated potential as valuable tools for identifying test smells.