Large Language Models (LLMs) have shown significant potential in automating software engineering tasks, particularly code generation. However, current evaluation benchmarks, which focus primarily on accuracy, fall short in assessing the quality of the generated code, in particular the models' tendency to produce code smells. To address this limitation, we introduce CodeSmellEval, a benchmark designed to evaluate the propensity of LLMs to generate code smells. Our benchmark includes a novel metric, the Propensity Smelly Score (PSC), and a curated dataset of method-level code smells, CodeSmellData. To demonstrate the use of CodeSmellEval, we conducted a case study with two state-of-the-art LLMs, CodeLlama and Mistral. The results reveal that both models tend to generate code smells such as simplifiable-condition and consider-merging-isinstance. These findings highlight the effectiveness of our benchmark in evaluating LLMs, providing valuable insights into their reliability and their propensity to introduce code smells in code generation tasks.