Competitive programming platforms such as LeetCode, Codeforces, and HackerRank evaluate programming skills and are often used by recruiters for screening. With the rise of advanced Large Language Models (LLMs) such as ChatGPT, Gemini, and Meta AI, their problem-solving ability on these platforms needs assessment. This study examines how well LLMs handle diverse programming challenges across platforms of varying difficulty, offering insights into their real-time and offline performance and comparing them with human programmers. We tested 98 problems from LeetCode and 126 from Codeforces, covering 15 categories. To assess real-time performance, we ran nine online contests on Codeforces and LeetCode, along with two certification tests on HackerRank. Prompts and feedback mechanisms were used to guide the LLMs, and correlations were explored across different scenarios. The LLMs, led by ChatGPT (71.43% success on LeetCode), excelled on the LeetCode archive and in HackerRank certifications but struggled in virtual contests, particularly on Codeforces. They outperformed users on the LeetCode archive in both time and memory efficiency, but underperformed in harder Codeforces contests. While LLMs do not pose an immediate threat, their performance on these platforms is concerning, and their future improvements will need to be addressed.
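The prompt-and-feedback mechanism mentioned above can be illustrated with a minimal sketch. The names below (`query_llm`, `run_sample_tests`, the retry budget) are hypothetical placeholders, not the study's actual harness; they stand in for an API client and a judge that runs a candidate solution against a problem's sample cases.

```python
# Minimal sketch of a prompt-and-feedback loop for guiding an LLM on a
# programming problem. All helper names here are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class TestResult:
    passed: bool
    error: str = ""


def query_llm(prompt: str) -> str:
    """Stub for an LLM call (e.g., a chat-completion API request)."""
    raise NotImplementedError("plug in a real chat-completion client here")


def run_sample_tests(code: str) -> TestResult:
    """Stub: execute `code` against the problem's sample test cases."""
    raise NotImplementedError("plug in a sandboxed judge here")


def solve_with_feedback(statement: str, max_rounds: int = 3) -> str | None:
    """Ask the LLM for a solution; on failure, feed the error back and retry."""
    prompt = f"Solve this competitive programming problem in Python:\n{statement}"
    for _ in range(max_rounds):
        code = query_llm(prompt)
        result = run_sample_tests(code)
        if result.passed:
            return code  # accepted on the sample tests
        # Feedback round: show the model its own code and the failure it caused.
        prompt = (
            f"Your previous solution failed with:\n{result.error}\n\n"
            f"Previous code:\n{code}\n\nPlease fix it."
        )
    return None  # no passing solution within the retry budget
```

In this scheme, each failed attempt becomes context for the next prompt, so the model can self-correct within a fixed retry budget rather than being judged on a single one-shot generation.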