While Large Language Models (LLMs) have demonstrated significant advancements in reasoning and agent-based problem-solving, current evaluation methodologies fail to adequately assess their capabilities: existing benchmarks rely either on closed-ended questions prone to saturation and memorization, or on subjective comparisons that lack consistency and rigor. In this work, we introduce HeuriGym, an agentic framework for evaluating heuristic algorithms that LLMs generate for combinatorial optimization problems, a class of problems characterized by clearly defined objectives and expansive solution spaces. HeuriGym empowers LLMs to propose heuristics, receive evaluative feedback via code execution, and iteratively refine their solutions. We evaluate nine state-of-the-art models on nine problems across domains such as computer systems, logistics, and biology, exposing persistent limitations in tool use, planning, and adaptive reasoning. To quantify performance, we propose the Quality-Yield Index (QYI), a metric that captures both solution pass rate and solution quality. Even top models like GPT-o4-mini-high and Gemini-2.5-Pro attain QYI scores of only 0.6, well below the expert baseline of 1. Our open-source benchmark aims to guide the development of LLMs toward more effective and realistic problem-solving in scientific and engineering domains.
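The abstract describes QYI only at a high level, as a combination of pass rate (yield) and solution quality. The sketch below illustrates one plausible way such a metric could be computed; the harmonic-mean combination, the `expert_cost` baseline, the clipped-ratio normalization, and the minimization convention are all illustrative assumptions, not the paper's exact definition.

```python
from typing import Optional

def quality_yield_index(costs: list[Optional[float]], expert_cost: float) -> float:
    """Combine pass rate and solution quality into one score in [0, 1].

    `costs` holds one objective value per generated heuristic; None marks a
    run that failed to produce a valid solution. Assumes a minimization
    objective where the expert baseline `expert_cost` scores exactly 1.
    """
    valid = [c for c in costs if c is not None]
    if not valid:
        return 0.0
    # Yield: fraction of runs yielding a valid (executable, feasible) solution.
    y = len(valid) / len(costs)
    # Quality: average closeness of valid solutions to the expert baseline,
    # clipped to [0, 1] (an assumed normalization; the paper's may differ).
    q = sum(min(expert_cost / c, 1.0) for c in valid) / len(valid)
    # A harmonic mean penalizes trading off yield against quality
    # (this combination rule is itself an assumption for illustration).
    return 2 * y * q / (y + q) if (y + q) > 0 else 0.0

# Example: 6 of 8 runs valid, with costs at or above the expert's 100.
print(quality_yield_index([120, 110, None, 105, 140, None, 100, 130], expert_cost=100))
```

Under these assumptions, a model must both produce valid solutions often and produce good ones to score near the expert baseline of 1, which matches the abstract's framing of QYI as capturing pass rate and quality jointly.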