Large Reasoning Models (LRMs) exhibit strong performance, yet often produce rationales that sound plausible but fail to reflect their true decision process, undermining reliability and trust. We introduce a formal framework for reasoning faithfulness, defined by two testable conditions: stance consistency (a coherent stance linking reasoning to answer) and causal influence (the stated reasoning causally drives the answer under output-level interventions), explicitly decoupled from accuracy. To operationalize this framework, we present RFEval, a benchmark of 7,186 instances across seven tasks that probes faithfulness via controlled, output-level counterfactual interventions. Evaluating twelve open-source LRMs, we find unfaithfulness in 49.7% of outputs, predominantly from stance inconsistency. Failures are concentrated in brittle, convergent domains such as math and code, and correlate more strongly with post-training regimes than with model scale: within-family ablations indicate that adding current RL-style objectives on top of supervised fine-tuning can reduce reasoning faithfulness, even when accuracy is maintained. Crucially, accuracy is neither sufficient for nor a reliable proxy of faithfulness: after controlling for model and task, the accuracy-faithfulness link is weak and statistically insignificant. Our work establishes a rigorous methodology for auditing LRM reliability and shows that trustworthy AI requires optimizing not only for correct outcomes but also for the structural integrity of the reasoning process. Our code and dataset can be found at the project page: $\href{https://aidaslab.github.io/RFEval/}{https://aidaslab.github.io/RFEval/}$