We introduce \CFE{} (\textbf{C}lassroom \textbf{F}inal \textbf{E}xam), a multimodal benchmark for evaluating the reasoning capabilities of large language models across more than 20 STEM domains. \CFE{} is curated from authentic, repeatedly used university homework and exam problems, together with reference solutions provided by the course instructors. \CFE{} poses a significant challenge even for frontier models: the newly released Gemini-3.1-pro-preview achieves an overall accuracy of only 59.69\%, while the second-best model, Gemini-3-flash-preview, reaches 55.46\%, leaving considerable room for improvement. Beyond leaderboard results, we conduct a diagnostic analysis by decomposing reference solutions into reasoning flows. We find that although frontier models can often answer intermediate sub-questions correctly, they struggle to reliably derive and maintain correct intermediate states throughout multi-step solutions. We further observe that model-generated solutions typically contain more reasoning steps than the instructor-provided references, indicating suboptimal step efficiency and a higher risk of error accumulation. The data and code are available at \url{https://github.com/Analogy-AI/CFE_Bench}.