With the emergence of reasoning language models such as OpenAI-o3 and DeepSeek-R1, large language models (LLMs) have entered a new phase of development. However, existing benchmarks for coding evaluation are increasingly inadequate for assessing the code reasoning capability of advanced LLMs. To bridge this gap in high-level code reasoning assessment, we propose ProBench, a benchmark for LLMs in competitive programming inspired by the International Collegiate Programming Contest. ProBench collects a comprehensive set of competitive programming problems from the Codeforces, Luogu, and Nowcoder platforms over the period from July to December 2024, obtaining real test results through online submissions to ensure the fairness and accuracy of the evaluation. We establish a unified problem attribute system, including difficulty grading and algorithm tagging. With the carefully collected and annotated data in ProBench, we systematically assess 9 of the latest LLMs in competitive programming across multiple dimensions, including chain-of-thought analysis, error type diagnosis, and reasoning depth evaluation. Experimental results show that QwQ-32B-Preview achieves the best score of 20.93, followed by DeepSeek-V3 with a score of 16.38, suggesting that models trained on specialized reasoning tasks significantly outperform general-purpose models (even those larger than the reasoning-oriented models) in programming. Further analysis also reveals key areas for enhancing programming capability, e.g., algorithm adaptability and reasoning sufficiency, providing important insights for the future development of reasoning models.