Large Language Models (LLMs) are increasingly entering specialized, safety-critical engineering workflows governed by strict quantitative standards and immutable physical laws, making rigorous evaluation of their reasoning capabilities imperative. However, existing benchmarks such as MMLU, MATH, and HumanEval assess isolated cognitive skills, failing to capture the physically grounded reasoning central to engineering, where scientific principles, quantitative modeling, and practical constraints must converge. To enable verifiable process supervision in engineering, we introduce EngTrace, a symbolic benchmark comprising 90 templates spanning three major engineering branches, nine core domains, and 20 distinct areas. Through domain-aware parameterization, we generate 1,350 unique, contamination-resistant test cases to stress-test generalization. Moving beyond outcome matching, we propose a verifiable two-stage evaluation framework that uses a tiered protocol to validate intermediate reasoning traces alongside final answers through automated procedural checks and a heterogeneous AI Tribunal. Our evaluation of 24 leading LLMs reveals a distinct trade-off between numeric precision and trace fidelity, identifying a complexity cliff where abstract mathematical pre-training fails to translate into the integrative reasoning required for advanced engineering tasks.
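The template-parameterization idea described above can be sketched as follows. This is a minimal illustrative mock, not the benchmark's actual implementation: the template, its parameter names, ranges, and the `instantiate` helper are all assumptions chosen to show how a symbolic template with domain-aware bounds can yield unique, contamination-resistant test cases whose ground truth is computed rather than memorized.

```python
import random

# Hypothetical symbolic template (illustrative only): an Ohm's-law problem
# whose parameter ranges are constrained to physically plausible values.
TEMPLATE = {
    "prompt": ("A resistor of {R} ohms carries a current of {I} A. "
               "Compute the voltage across it in volts."),
    "params": {"R": (10.0, 1000.0), "I": (0.1, 5.0)},  # domain-aware bounds
    "solution": lambda R, I: R * I,  # ground truth for automated checking
}

def instantiate(template, seed):
    """Sample one concrete test case from a symbolic template."""
    rng = random.Random(seed)  # seeded -> reproducible, yet unseen in training data
    values = {name: round(rng.uniform(lo, hi), 2)
              for name, (lo, hi) in template["params"].items()}
    return {
        "question": template["prompt"].format(**values),
        "answer": template["solution"](**values),
    }

case = instantiate(TEMPLATE, seed=42)
print(case["question"])
print(case["answer"])
```

Because every instance carries a symbolically computed answer, final-answer checking is exact, and the same sampled values can be threaded through intermediate quantities to verify reasoning traces procedurally.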