Evaluating large language models (LLMs) on final-answer correctness is the dominant paradigm. This approach, however, provides only a coarse signal for model improvement and overlooks the quality of the underlying reasoning process. We argue that a more granular evaluation of reasoning offers a more effective path to building robust models. We decompose reasoning quality into two dimensions: relevance and coherence. Relevance measures whether a step is grounded in the problem; coherence measures whether it follows logically from the preceding steps. To measure these dimensions reliably, we introduce causal stepwise evaluation (CaSE). This method assesses each reasoning step using only its preceding context, which avoids hindsight bias. We validate CaSE against human judgments on our new expert-annotated benchmarks, MRa-GSM8K and MRa-MATH. More importantly, we show that curating training data with CaSE-evaluated relevance and coherence directly improves final task performance. Our work provides a scalable framework for analyzing, debugging, and improving LLM reasoning, demonstrating the practical value of moving beyond validity checks.
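Purely as an illustration of the causal constraint described above, the following Python sketch shows how a stepwise evaluator might be organized: each step is scored from the problem statement and the steps that precede it, never from later steps or the final answer. The names used here (`evaluate_causally`, `StepScores`, the `judge` callable) are hypothetical and not taken from the paper; the actual CaSE prompting and scoring procedure is not specified in this section.

```python
# Minimal sketch of causal stepwise evaluation (hypothetical names, not the
# paper's implementation): every reasoning step is judged using only the
# problem and the prefix of earlier steps, so the judge has no hindsight.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StepScores:
    relevance: float   # is this step grounded in the problem?
    coherence: float   # does it follow logically from the prior steps?

def evaluate_causally(
    problem: str,
    steps: List[str],
    judge: Callable[[str, List[str], str], StepScores],
) -> List[StepScores]:
    """Score each step from its preceding context only."""
    scores: List[StepScores] = []
    for i, step in enumerate(steps):
        prefix = steps[:i]  # only earlier steps are visible to the judge
        scores.append(judge(problem, prefix, step))
    return scores
```

Under these assumptions, `judge` would typically be an LLM-based scorer prompted with the problem, the prefix, and the candidate step; the per-step relevance and coherence scores could then be aggregated to filter or rank training traces.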