Deploying small language models (7-9B parameters) as autonomous agents requires trust in their reasoning, not just their outputs. We reveal a critical reliability crisis: 50-69\% of correct answers from these models contain fundamentally flawed reasoning -- a ``Right-for-Wrong-Reasons'' phenomenon invisible to standard accuracy metrics. Through analysis of 10,734 reasoning traces across three models and diverse tasks, we introduce the Reasoning Integrity Score (RIS), a process-based metric validated with substantial inter-rater agreement ($\kappa=0.657$). Our findings challenge conventional practice: while retrieval-augmented generation (RAG) significantly improves reasoning integrity (Cohen's $d=0.23$--$0.93$), meta-cognitive interventions such as self-critique often harm performance ($d=-0.14$ to $-0.33$) in small models on the evaluated tasks. Mechanistic analysis reveals that RAG succeeds by grounding calculations in external evidence, reducing errors by 7.6\%, whereas meta-cognition amplifies confusion when model capacity is insufficient. To enable deployment, we distill verification capabilities into a neural classifier that achieves a 0.86 F1-score with a 100$\times$ speedup. These results underscore the necessity of process-based verification for trustworthy agents: accuracy alone is dangerously insufficient when models can be right for entirely wrong reasons.