Large Language Models (LLMs) are able to improve their responses when instructed to do so, a capability known as self-correction. When instructions provide only the task's goal without specific details about potential issues in the response, LLMs must rely on their internal knowledge to improve response quality, a process referred to as intrinsic self-correction. The empirical success of intrinsic self-correction is evident in various applications, but how and why it is effective remains unknown. In this paper, we show that intrinsic self-correction can be progressively improved, allowing it to approach a converged state. Our findings are verified in: (1) the scenario of multi-round question answering, by comprehensively demonstrating that intrinsic self-correction can progressively introduce performance gains through iterative interactions, ultimately converging to stable performance; and (2) the context of intrinsic self-correction for enhanced morality, in which we provide empirical evidence that iteratively applying instructions reduces model uncertainty towards convergence, which in turn leads to convergence of both the calibration error and the self-correction performance, ultimately resulting in a stable state of intrinsic self-correction. Furthermore, we introduce a mathematical formulation and a simulation task indicating that the latent concepts activated by self-correction instructions drive the reduction of model uncertainty. Based on our experimental results and our analysis of the convergence of intrinsic self-correction, we reveal its underlying mechanism: consistently injected instructions reduce model uncertainty, which yields converged, improved performance.
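The claimed dynamic — repeated instruction injection lowering model uncertainty until performance stabilizes — can be illustrated with a toy simulation. The sketch below is not the paper's actual formulation: it simply models each self-correction round as sharpening a fixed answer distribution (temperature decaying toward a floor), and tracks Shannon entropy as a stand-in for model uncertainty; the logits, decay rate, and floor are all illustrative assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def softmax(logits, temperature):
    """Temperature-scaled softmax; lower temperature -> sharper distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Toy stand-in for a model's distribution over candidate answers.
logits = [2.0, 1.0, 0.5, 0.1]

# Model each self-correction round as an instruction "injection" that
# sharpens the distribution: temperature decays toward a floor, so
# uncertainty (entropy) falls and then flattens out (converges).
temperature = 1.0
uncertainties = []
for round_idx in range(10):
    p = softmax(logits, temperature)
    uncertainties.append(entropy(p))
    temperature = max(0.25, temperature * 0.7)

# Entropy is non-increasing across rounds and stabilizes once the
# temperature hits its floor -- a converged state of uncertainty.
```

Under these assumptions, `uncertainties` decreases round over round and becomes constant after the temperature reaches its floor, mirroring the abstract's claim that uncertainty reduction converges and carries performance to a stable state.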