Code generation models have become increasingly integral to software development. Although current research has thoroughly examined the correctness of the code produced by code generation models, a vital aspect, the efficiency of the generated code, which plays a pivotal role in green computing and sustainability efforts, has often been neglected. This paper presents EffiBench, a benchmark of 1,000 efficiency-critical coding problems for assessing the efficiency of code generated by code generation models. EffiBench contains a diverse set of LeetCode coding problems, each paired with an executable human-written canonical solution that achieves state-of-the-art (SOTA) efficiency on the LeetCode solution leaderboard. With EffiBench, we empirically examine the ability of 42 large language models (35 open-source and 7 closed-source) to generate efficient code. Our evaluation results show that the efficiency of LLM-generated code is generally worse than that of human-written canonical solutions. For example, the average execution time of GPT-4-generated code is \textbf{3.12} times that of the human-written canonical solutions. In the most extreme cases, the execution time and total memory usage of GPT-4-generated code are \textbf{13.89} and \textbf{43.92} times those of the canonical solutions, respectively. The source code of EffiBench is released at https://github.com/huangd1999/EffiBench, and a leaderboard is available at https://huggingface.co/spaces/EffiBench/effibench-leaderboard.
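To illustrate the kind of comparison the reported ratios describe, below is a minimal Python sketch of how one might profile a generated solution against a canonical one. This is an assumption for illustration, not EffiBench's actual harness: it uses `time.perf_counter` for execution time and `tracemalloc` peak allocation as a simple stand-in for the paper's total-memory metric, and the `canonical`/`generated` functions are hypothetical placeholders.

```python
import time
import tracemalloc

def profile(solve, test_input):
    """Measure wall-clock time and peak Python-level memory of one run.

    `solve` is a candidate solution callable and `test_input` a test case;
    both are placeholders, and EffiBench's real measurement may differ.
    """
    tracemalloc.start()
    start = time.perf_counter()
    solve(test_input)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) bytes
    tracemalloc.stop()
    return elapsed, peak

# Hypothetical canonical solution (efficient built-in).
def canonical(nums):
    return sum(nums)

# Hypothetical LLM-generated solution (slower explicit loop).
def generated(nums):
    total = 0
    for x in nums:
        total += x
    return total

case = list(range(1_000_000))
t_canon, m_canon = profile(canonical, case)
t_gen, m_gen = profile(generated, case)
print(f"execution-time ratio: {t_gen / t_canon:.2f}x")
print(f"peak-memory ratio:    {m_gen / m_canon:.2f}x")
```

Ratios above 1.0 indicate the generated solution is less efficient than the canonical one, mirroring how the abstract's \textbf{3.12}x, \textbf{13.89}x, and \textbf{43.92}x figures should be read.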