Large Language Models (LLMs) have shown significant potential in automating software engineering tasks, particularly code generation. However, current evaluation benchmarks focus primarily on accuracy and fall short in assessing the quality of the generated code, specifically the models' tendency to produce code smells. To address this limitation, we introduce CodeSmellEval, a benchmark designed to evaluate the propensity of LLMs to generate code smells. Our benchmark comprises a novel metric, the Propensity Smelly Score (PSC), and a curated dataset of method-level code smells, CodeSmellData. To demonstrate the use of CodeSmellEval, we conducted a case study with two state-of-the-art LLMs, CodeLlama and Mistral. The results reveal that both models tend to generate code smells, such as simplifiable-condition and consider-merging-isinstance. These findings highlight the effectiveness of our benchmark in evaluating LLMs, providing valuable insights into their reliability and their propensity to introduce code smells in code generation tasks.
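For readers unfamiliar with the two smell names above, both correspond to checks reported by the Pylint static analyzer. The sketch below is an illustrative (hypothetical) example of each smell alongside the cleaner form its refactoring produces; the function names are ours, not part of the benchmark.

```python
# consider-merging-isinstance: chained isinstance calls on the same
# value that Pylint suggests merging into one call with a tuple of types.
def is_number_smelly(value):
    return isinstance(value, int) or isinstance(value, float)

def is_number_clean(value):
    return isinstance(value, (int, float))

# simplifiable-condition: a boolean expression combined with a constant,
# which Pylint suggests reducing to the expression itself.
def positive_smelly(x):
    return (x > 0) and True  # 'and True' is redundant

def positive_clean(x):
    return x > 0
```

Both variants in each pair are behaviorally equivalent; the "smelly" forms are the patterns a linter flags, which is exactly the kind of output PSC is meant to quantify in generated code.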