Although large language models (LLMs) have achieved remarkable performance across various tasks, they remain prone to errors. A key challenge is enabling them to self-correct. While prior research has relied on external tools or large proprietary models, this work explores self-correction in small language models (SLMs) through iterative fine-tuning using solely self-generated data. We introduce the Self-Taught Self-Correction (STaSC) algorithm, which incorporates multiple algorithmic design choices. Experimental results on a question-answering task demonstrate that STaSC effectively learns self-correction, leading to significant performance improvements. Our analysis further provides insights into the mechanisms of self-correction and the impact of different design choices on learning dynamics and overall performance. To support future research, we release our user-friendly codebase and lightweight models.