Large Language Models (LLMs) applied to code-related applications have emerged as a prominent field, attracting significant interest from both academia and industry. However, as new and improved LLMs are developed, existing evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient for assessing their capabilities. In this work, we propose LiveCodeBench, a comprehensive and contamination-free evaluation of LLMs for code, which continuously collects new problems over time from contests across three competition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our benchmark also focuses on a broader range of code-related capabilities, such as self-repair, code execution, and test output prediction, beyond just code generation. Currently, LiveCodeBench hosts four hundred high-quality coding problems that were published between May 2023 and May 2024. We have evaluated 18 base LLMs and 34 instruction-tuned LLMs on LiveCodeBench. We present empirical findings on contamination, holistic performance comparisons, potential overfitting in existing benchmarks, as well as individual model comparisons. We will release all prompts and model completions for further community analysis, along with a general toolkit for adding new scenarios and models.
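The contamination-avoidance idea above can be sketched in a few lines: if each problem carries its contest release date, evaluation can be restricted to problems published strictly after a model's training cutoff. The record fields and function name below are hypothetical, intended only to illustrate the date-based filtering, not LiveCodeBench's actual toolkit API.

```python
from datetime import date

# Hypothetical problem records; each problem is annotated with the date
# it was released on its competition platform.
problems = [
    {"title": "A", "platform": "LeetCode",   "release_date": date(2023, 6, 1)},
    {"title": "B", "platform": "AtCoder",    "release_date": date(2023, 4, 15)},
    {"title": "C", "platform": "CodeForces", "release_date": date(2024, 1, 10)},
]

def contamination_free(problems, cutoff):
    """Keep only problems released strictly after the model's training cutoff."""
    return [p for p in problems if p["release_date"] > cutoff]

# For a model whose training data ends May 1, 2023, only problems A and C
# can be considered uncontaminated.
clean = contamination_free(problems, cutoff=date(2023, 5, 1))
print([p["title"] for p in clean])  # → ['A', 'C']
```

Because new contest problems appear continuously, the evaluation set can be refreshed for each newly released model by simply moving the cutoff forward.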