Amidst recent strides in evaluating Large Language Models for Code (Code LLMs), existing benchmarks have mainly focused on the functional correctness of generated code, neglecting its computational efficiency. To fill this gap, we present Mercury, the first code-efficiency benchmark for Code LLMs. It comprises 1,889 Python tasks, each accompanied by adequate solutions that serve as real-world efficiency baselines, enabling a comprehensive analysis of the runtime distribution. Based on this distribution, we introduce a new metric, Beyond, which computes a runtime-percentile-weighted Pass score to reflect functional correctness and code efficiency simultaneously. On Mercury, leading Code LLMs achieve 65% on Pass but less than 50% on Beyond. Since an ideal Beyond score would be aligned with the Pass score, this gap indicates that while Code LLMs exhibit impressive capabilities in generating functionally correct code, their efficiency still lags notably behind. Finally, our empirical experiments reveal that Direct Preference Optimization (DPO) serves as a robust baseline for enhancing code efficiency compared with Supervised Fine-Tuning (SFT), paving a promising avenue for future exploration of efficient code generation. Our code and data are available on GitHub: https://github.com/Elfsong/Mercury.
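To make the idea of a runtime-percentile-weighted Pass score concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes each task records whether the generated code passed, its measured runtime, and a list of baseline-solution runtimes, and scores a passing task by the fraction of baselines it runs at least as fast as (an incorrect solution contributes zero, so the score can never exceed Pass). The function name `beyond_score` and the task-record fields are illustrative assumptions.

```python
def beyond_score(tasks):
    """Illustrative sketch of a runtime-percentile-weighted pass score.

    Each task is a dict with (hypothetical) fields:
      "passed"    - bool, functional correctness of the generated code
      "runtime"   - float, measured runtime of the generated solution
      "baselines" - list of floats, runtimes of collected baseline solutions
    """
    scores = []
    for task in tasks:
        if not task["passed"]:
            # A functionally incorrect solution scores zero,
            # so Beyond is upper-bounded by Pass.
            scores.append(0.0)
            continue
        # Percentile weight: fraction of baseline solutions that the
        # generated solution runs at least as fast as.
        beaten = sum(1 for b in task["baselines"] if task["runtime"] <= b)
        scores.append(beaten / len(task["baselines"]))
    return sum(scores) / len(tasks)
```

Under this sketch, a model that always passes but always matches only the slowest baseline would score a high Pass yet a low Beyond, which is exactly the gap the benchmark is designed to expose.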