Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem solving, and reasoning. Existing benchmarks evaluate these tasks in isolation, yet the extent to which LLMs can understand prose-style tasks, identify the underlying problems, and then generate appropriate code solutions remains unexplored. Addressing this gap, we introduce PECC, a novel benchmark derived from Advent of Code (AoC) challenges and Project Euler, comprising 2396 problems. Unlike conventional benchmarks, PECC requires LLMs to interpret narrative-embedded problems, extract requirements, and generate executable code. A key feature of our dataset is the complexity added by natural language prompting in chat-based evaluations, mirroring real-world instruction ambiguities. Results show varying model performance between narrative and neutral problems, with particular difficulty on the math-oriented Euler subset: GPT-3.5-Turbo passes 50% of the AoC challenges but only 8% of the Euler problems. By probing the limits of LLMs' capabilities, our benchmark provides a framework to monitor and assess the subsequent progress of LLMs as universal problem solvers.