Large language models (LLMs) have demonstrated impressive capabilities in various reasoning tasks, aided by techniques such as chain-of-thought prompting, which elicits verbalized reasoning. However, LLMs often generate text with obvious mistakes and contradictions, raising doubts about their ability to robustly process and utilize generated rationales. In this work, we investigate reasoning in LLMs through the lens of internal representations, focusing on how these representations are influenced by generated rationales. Our preliminary analysis reveals that while generated rationales improve answer accuracy, inconsistencies emerge between the model's internal representations in middle layers and those in final layers, potentially undermining the reliability of its reasoning processes. To address this, we propose internal consistency as a measure of the model's confidence, computed from the agreement of latent predictions decoded from intermediate layers. Extensive empirical studies across different models and datasets demonstrate that internal consistency effectively distinguishes between correct and incorrect reasoning paths. Motivated by this, we propose a new approach to calibrate reasoning by up-weighting reasoning paths with high internal consistency, resulting in a significant boost in reasoning performance. Further analysis uncovers distinct patterns in attention and feed-forward modules across layers, providing insights into the emergence of internal inconsistency. In summary, our results demonstrate the potential of using internal representations for self-evaluation of LLMs. Our code is available at github.com/zhxieml/internal-consistency.
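The internal-consistency measure described above can be sketched as follows, assuming a logit-lens-style decoding that maps each layer's hidden state to vocabulary logits and scoring agreement with the final layer's prediction. The function name, array shapes, and toy numbers are illustrative, not the paper's implementation:

```python
import numpy as np

def internal_consistency(layer_logits: np.ndarray) -> float:
    """Fraction of intermediate-layer latent predictions that agree
    with the final layer's prediction.

    layer_logits: shape (num_layers, vocab_size), holding logit-lens
    decodings of the hidden state at each layer (hypothetical input).
    """
    preds = layer_logits.argmax(axis=-1)            # latent prediction per layer
    return float((preds[:-1] == preds[-1]).mean())  # agreement with final layer

# Toy example: 4 layers, 3-token vocabulary.
logits = np.array([
    [0.1, 0.9, 0.0],   # layer 1 predicts token 1
    [0.2, 0.7, 0.1],   # layer 2 predicts token 1
    [0.8, 0.1, 0.1],   # layer 3 predicts token 0
    [0.1, 0.8, 0.1],   # final layer predicts token 1
])
score = internal_consistency(logits)  # 2 of 3 intermediate layers agree
```

A high score indicates that the model's latent prediction stabilizes early and persists across layers; a low score flags the middle-versus-final-layer disagreement the abstract describes.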
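The calibration step can be sketched as a consistency-weighted vote over sampled reasoning paths: each path's answer is weighted by its internal-consistency score, so high-consistency paths dominate the final answer. The helper and scores below are hypothetical illustrations, not the paper's exact procedure:

```python
from collections import Counter

def calibrated_answer(paths: list[tuple[str, float]]) -> str:
    """Pick the answer with the largest total consistency weight.

    paths: list of (answer, consistency_score) pairs, one per sampled
    reasoning path (scores here are assumed to come from intermediate-
    layer decoding, as in the internal-consistency measure).
    """
    weights = Counter()
    for answer, consistency in paths:
        weights[answer] += consistency
    return weights.most_common(1)[0][0]

# Four sampled paths with hypothetical consistency scores:
# "42" accumulates weight 0.9 + 0.7 = 1.6, "41" only 0.4 + 0.5 = 0.9.
paths = [("42", 0.9), ("41", 0.4), ("42", 0.7), ("41", 0.5)]
answer = calibrated_answer(paths)
```

Compared with unweighted majority voting (self-consistency), this up-weights paths the model's own layers agree on, which is the calibration idea the abstract reports as improving reasoning performance.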